Message-ID: <aRSvxyoWiqzcBj-N@slm.duckdns.org>
Date: Wed, 12 Nov 2025 06:03:19 -1000
From: Tejun Heo <tj@...nel.org>
To: ying chen <yc1082463@...il.com>
Cc: corbet@....net, jiangshanlai@...il.com, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, laoar.shao@...il.com
Subject: Re: [PATCH] workqueue: add workqueue.mayday_initial_timeout

Hello,

On Wed, Nov 12, 2025 at 10:01:10AM +0800, ying chen wrote:
> Work items that have already been scheduled will be executed sequentially
> within the rescuer thread.
> static int rescuer_thread(void *__rescuer)
> {
> ......
>         /*
>          * Slurp in all works issued via this workqueue and
>          * process'em.
>          */
>         WARN_ON_ONCE(!list_empty(scheduled));
>         list_for_each_entry_safe(work, n, &pool->worklist, entry) {
>                 if (get_work_pwq(work) == pwq) {
>                         if (first)
>                                 pool->watchdog_ts = jiffies;
>                         move_linked_works(work, scheduled, &n);
>                 }
>                 first = false;
>         }
>
>         if (!list_empty(scheduled)) {
>                 process_scheduled_works(rescuer);
>                 ......
>         }

Ah, I see what you mean. The slurping is there to avoid potentially O(N^2)
scanning, but that's probably the wrong trade-off to make here. I think the
right solution is to make it break out after finding the first matching work
item and loop outside, so that it processes work items one by one.
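
Something like the below, as a rough sketch rather than an actual patch,
reusing the identifiers from the quoted excerpt (pool, pwq, scheduled,
rescuer) and assuming the locking around pool->worklist stays as it is
today:

for (;;) {
        struct work_struct *work, *n;
        bool first = true, found = false;

        /*
         * Grab only the first work item on this pool that belongs to
         * @pwq instead of slurping them all at once.
         */
        list_for_each_entry_safe(work, n, &pool->worklist, entry) {
                if (get_work_pwq(work) == pwq) {
                        if (first)
                                pool->watchdog_ts = jiffies;
                        move_linked_works(work, scheduled, &n);
                        found = true;
                        break;
                }
                first = false;
        }

        if (!found)
                break;

        /* process this one (possibly linked) work item, then rescan */
        process_scheduled_works(rescuer);
}

That gives up the single-pass slurp and accepts the potential O(N^2)
rescanning mentioned above in exchange for handling work items one at a
time.
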
Thanks.
--
tejun