Message-Id: <20251125063617.671199-2-jiangshanlai@gmail.com>
Date: Tue, 25 Nov 2025 14:36:14 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: linux-kernel@...r.kernel.org
Cc: Tejun Heo <tj@...nel.org>,
	ying chen <yc1082463@...il.com>,
	Lai Jiangshan <jiangshan.ljs@...group.com>,
	Lai Jiangshan <jiangshanlai@...il.com>
Subject: [PATCH V4 1/4] workqueue: Loop in rescuer until all its work is done

From: Lai Jiangshan <jiangshan.ljs@...group.com>

Simplify the rescuer's work by looping directly in the rescuer rather
than adding the pwq back to the mayday list. This also helps when
max_active is 1 or small while pwq->inactive_works holds a large number
of pending work items.
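
As a rough illustration only (a toy userspace model, not kernel code;
all names and numbers below are made up), the new shape of the loop is
"keep taking work from the same pwq until nothing is left" rather than
re-queueing the pwq on the mayday list once per batch:

/*
 * Toy userspace model (not kernel code) of the new rescuer behaviour:
 * loop on the same pwq until assign_rescuer_work() finds nothing left,
 * instead of re-adding the pwq to the mayday list after each batch.
 * All names and numbers here are made up for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_pwq {
	int pending;		/* work items still queued on this pwq */
	int max_active;		/* batch size handed out per pass */
};

/* Hand at most max_active items to the "rescuer"; true if it got any. */
static bool toy_assign_rescuer_work(struct toy_pwq *pwq)
{
	int batch = pwq->pending < pwq->max_active ?
		    pwq->pending : pwq->max_active;

	pwq->pending -= batch;
	return batch > 0;
}

int main(void)
{
	struct toy_pwq pwq = { .pending = 5, .max_active = 1 };
	int passes = 0;

	/* New behaviour: drain the pwq in one visit, one batch per pass. */
	while (toy_assign_rescuer_work(&pwq))
		passes++;	/* stands in for process_scheduled_works() */

	printf("drained in %d passes without re-queueing\n", passes);
	return 0;
}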

This might hurt fairness among PWQs, and the rescuer could get stuck on
one PWQ indefinitely, but the rescuer's objective is to make forward
progress rather than to ensure fairness.

Fairness can be further improved in the future by assigning work items
to the rescuer one by one; this patch is a temporary step to ease that
transition.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@...group.com>
---
 kernel/workqueue.c | 24 +-----------------------
 1 file changed, 1 insertion(+), 23 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 2654fbd481a1..02386e6eb409 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3520,31 +3520,9 @@ static int rescuer_thread(void *__rescuer)
 
 		WARN_ON_ONCE(!list_empty(&rescuer->scheduled));
 
-		if (assign_rescuer_work(pwq, rescuer)) {
+		while (assign_rescuer_work(pwq, rescuer))
 			process_scheduled_works(rescuer);
 
-			/*
-			 * The above execution of rescued work items could
-			 * have created more to rescue through
-			 * pwq_activate_first_inactive() or chained
-			 * queueing.  Let's put @pwq back on mayday list so
-			 * that such back-to-back work items, which may be
-			 * being used to relieve memory pressure, don't
-			 * incur MAYDAY_INTERVAL delay inbetween.
-			 */
-			if (pwq->nr_active && need_to_create_worker(pool)) {
-				raw_spin_lock(&wq_mayday_lock);
-				/*
-				 * Queue iff somebody else hasn't queued it already.
-				 */
-				if (list_empty(&pwq->mayday_node)) {
-					get_pwq(pwq);
-					list_add_tail(&pwq->mayday_node, &wq->maydays);
-				}
-				raw_spin_unlock(&wq_mayday_lock);
-			}
-		}
-
 		/*
 		 * Leave this pool. Notify regular workers; otherwise, we end up
 		 * with 0 concurrency and stalling the execution.
-- 
2.19.1.6.gb485710b

