Message-ID: <20251224161301.135382-1-marco.crivellari@suse.com>
Date: Wed, 24 Dec 2025 17:13:01 +0100
From: Marco Crivellari <marco.crivellari@...e.com>
To: linux-kernel@...r.kernel.org,
openipmi-developer@...ts.sourceforge.net
Cc: Tejun Heo <tj@...nel.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Frederic Weisbecker <frederic@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Marco Crivellari <marco.crivellari@...e.com>,
Michal Hocko <mhocko@...e.com>,
Corey Minyard <corey@...yard.net>
Subject: [PATCH] ipmi: Replace use of system_wq with system_percpu_wq
This patch continues the effort to refactor the workqueue APIs, which
began with the changes that introduced new workqueues and a new
alloc_workqueue flag:
commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
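For reference, a minimal sketch of what the new flag makes explicit
(the queue name here is hypothetical and not part of this patch):

	/* explicitly request per-CPU execution instead of relying
	 * on the current default */
	struct workqueue_struct *wq;

	wq = alloc_workqueue("example_wq", WQ_PERCPU, 0);
	if (!wq)
		return -ENOMEM;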
The goal of the refactoring is to eventually make workqueues unbound by
default, so that their workload placement is optimized by the scheduler.
Before that change can happen, and after a careful review of each
individual case, workqueue users must be converted to the better-named
new workqueues with no intended behaviour change (see the sketch after
the mapping below):
system_wq -> system_percpu_wq
system_unbound_wq -> system_dfl_wq
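As a minimal sketch of the conversion pattern (my_work is a hypothetical
work item; the hunks below apply the same change to intf->smi_work):

	/* before: old name with implicit per-CPU placement */
	queue_work(system_wq, &my_work);

	/* after: same per-CPU behaviour under the explicit name */
	queue_work(system_percpu_wq, &my_work);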
This way, the obsolete workqueues (system_wq and system_unbound_wq) can
be removed in the future.
Suggested-by: Tejun Heo <tj@...nel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@...e.com>
---
drivers/char/ipmi/ipmi_msghandler.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
index 3f48fc6ab596..ebdc8f683981 100644
--- a/drivers/char/ipmi/ipmi_msghandler.c
+++ b/drivers/char/ipmi/ipmi_msghandler.c
@@ -973,7 +973,7 @@ static int deliver_response(struct ipmi_smi *intf, struct ipmi_recv_msg *msg)
mutex_lock(&intf->user_msgs_mutex);
list_add_tail(&msg->link, &intf->user_msgs);
mutex_unlock(&intf->user_msgs_mutex);
- queue_work(system_wq, &intf->smi_work);
+ queue_work(system_percpu_wq, &intf->smi_work);
}
return rv;
@@ -4935,7 +4935,7 @@ void ipmi_smi_msg_received(struct ipmi_smi *intf,
if (run_to_completion)
smi_work(&intf->smi_work);
else
- queue_work(system_wq, &intf->smi_work);
+ queue_work(system_percpu_wq, &intf->smi_work);
}
EXPORT_SYMBOL(ipmi_smi_msg_received);
@@ -4945,7 +4945,7 @@ void ipmi_smi_watchdog_pretimeout(struct ipmi_smi *intf)
return;
atomic_set(&intf->watchdog_pretimeouts_to_deliver, 1);
- queue_work(system_wq, &intf->smi_work);
+ queue_work(system_percpu_wq, &intf->smi_work);
}
EXPORT_SYMBOL(ipmi_smi_watchdog_pretimeout);
@@ -5115,7 +5115,7 @@ static bool ipmi_timeout_handler(struct ipmi_smi *intf,
flags);
}
- queue_work(system_wq, &intf->smi_work);
+ queue_work(system_percpu_wq, &intf->smi_work);
return need_timer;
}
@@ -5171,7 +5171,7 @@ static void ipmi_timeout(struct timer_list *unused)
if (atomic_read(&stop_operation))
return;
- queue_work(system_wq, &ipmi_timer_work);
+ queue_work(system_percpu_wq, &ipmi_timer_work);
}
static void need_waiter(struct ipmi_smi *intf)
--
2.52.0