Message-Id: <20251121145720.342467-5-jiangshanlai@gmail.com>
Date: Fri, 21 Nov 2025 22:57:17 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: linux-kernel@...r.kernel.org
Cc: Tejun Heo <tj@...nel.org>,
ying chen <yc1082463@...il.com>,
Lai Jiangshan <jiangshan.ljs@...group.com>,
Lai Jiangshan <jiangshanlai@...il.com>
Subject: [PATCH V3 4/7] workqueue: Loop over in rescuer until all its work is done
From: Lai Jiangshan <jiangshan.ljs@...group.com>
Simplify the rescuer's work by looping directly in the rescuer rather than
adding the pwq back to the maydays list. This also helps when max_active
is 1 or small but pwq->inactive_works holds a large number of pending
work items.

This might hurt fairness among PWQs, and the rescuer could end up being
stuck on one PWQ indefinitely, but the rescuer's objective is to make
forward progress rather than to ensure fairness.

Fairness can be further improved in the future by assigning work items to
the rescuer one by one.
Signed-off-by: Lai Jiangshan <jiangshan.ljs@...group.com>
---
kernel/workqueue.c | 24 +-----------------------
1 file changed, 1 insertion(+), 23 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 943fa27e272b..3032235a131e 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3526,31 +3526,9 @@ static int rescuer_thread(void *__rescuer)
WARN_ON_ONCE(!list_empty(&rescuer->scheduled));
- if (assign_rescuer_work(pwq, rescuer)) {
+ while (assign_rescuer_work(pwq, rescuer))
process_scheduled_works(rescuer);
- /*
- * The above execution of rescued work items could
- * have created more to rescue through
- * pwq_activate_first_inactive() or chained
- * queueing. Let's put @pwq back on mayday list so
- * that such back-to-back work items, which may be
- * being used to relieve memory pressure, don't
- * incur MAYDAY_INTERVAL delay inbetween.
- */
- if (pwq->nr_active && need_to_create_worker(pool)) {
- raw_spin_lock(&wq_mayday_lock);
- /*
- * Queue iff somebody else hasn't queued it already.
- */
- if (list_empty(&pwq->mayday_node)) {
- get_pwq(pwq);
- list_add_tail(&pwq->mayday_node, &wq->maydays);
- }
- raw_spin_unlock(&wq_mayday_lock);
- }
- }
-
/*
* Leave this pool. Notify regular workers; otherwise, we end up
* with 0 concurrency and stalling the execution.
--
2.19.1.6.gb485710b