Message-ID: <CAN2Y7hwUmdFMM=mwYq7gsBpbSEBq6n0nXzmES4_=p3fDV=S+Ag@mail.gmail.com>
Date: Wed, 12 Nov 2025 10:01:10 +0800
From: ying chen <yc1082463@...il.com>
To: Tejun Heo <tj@...nel.org>
Cc: corbet@....net, jiangshanlai@...il.com, linux-doc@...r.kernel.org, 
	linux-kernel@...r.kernel.org, laoar.shao@...il.com
Subject: Re: [PATCH] workqueue: add workqueue.mayday_initial_timeout

On Wed, Nov 12, 2025 at 4:40 AM Tejun Heo <tj@...nel.org> wrote:
>
> Hello,
>
> On Tue, Nov 11, 2025 at 10:52:44AM +0800, ying chen wrote:
> > If creating a new worker takes longer than MAYDAY_INITIAL_TIMEOUT,
> > the rescuer thread will be woken up to process works scheduled on
> > @pool, resulting in sequential execution of all works. This may lead
> > to a situation where one work blocks others. Moreover, the initial
> > rescue timeout defaults to only 10 milliseconds, which is easily
> > exceeded in heavy-load environments.
>
> This is not how workqueue works. Rescuer doesn't exclude other workers. If
> other workers become available, they will run the workqueue concurrently.
> All that initial timeout achieves is delaying the initial execution from the
> rescuer.
>
> Is this from observing real behaviors? If so, what was the test case and how
> did the behavior change after the patch? It couldn't have gotten better.
>
> Thanks.
>
> --
> tejun

We encountered an XFS deadlock issue. However, unlike the scenario
described in the patch below, in our case the rescuer thread was still
woken up even though memory was sufficient, likely due to heavy load.

patch: xfs: don't use BMBT btree split workers for IO completion
(commit c85007e2e3942da1f9361e4b5a9388ea3a8dcc5b)
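
For context, the timer that wakes the rescuer is armed in
maybe_create_worker(): if create_worker() has not finished within
MAYDAY_INITIAL_TIMEOUT (10ms by default), pool_mayday_timeout() fires and
sends mayday, regardless of whether the delay comes from memory pressure or
simply from scheduling latency under heavy load. Roughly (abridged from
kernel/workqueue.c; exact code differs between kernel versions):

static void maybe_create_worker(struct worker_pool *pool)
{
restart:
        raw_spin_unlock_irq(&pool->lock);

        /* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
        mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);

        while (true) {
                if (create_worker(pool) || !need_to_create_worker(pool))
                        break;
                ......
        }

        del_timer_sync(&pool->mayday_timer);
        raw_spin_lock_irq(&pool->lock);
        ......
}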

Works already queued on the pool are then executed sequentially within
the rescuer thread:
static int rescuer_thread(void *__rescuer)
{
                ......
                /*
                 * Slurp in all works issued via this workqueue and
                 * process'em.
                 */
                WARN_ON_ONCE(!list_empty(scheduled));
                list_for_each_entry_safe(work, n, &pool->worklist, entry) {
                        if (get_work_pwq(work) == pwq) {
                                if (first)
                                        pool->watchdog_ts = jiffies;
                                move_linked_works(work, scheduled, &n);
                        }
                        first = false;
                }

                if (!list_empty(scheduled)) {
                        process_scheduled_works(rescuer);
                        ......
                }
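
(For readers without the patch at hand: a workqueue.* boot parameter of the
kind named in the subject is usually wired up roughly as sketched below; the
exact variable name, type and default used in the posted patch may differ,
so treat this as an illustration only.)

/* illustrative sketch, not the posted patch */
static unsigned long wq_mayday_initial_timeout_ms = 10;  /* current default: 10ms */
module_param_named(mayday_initial_timeout, wq_mayday_initial_timeout_ms,
                   ulong, 0644);
MODULE_PARM_DESC(mayday_initial_timeout,
                 "milliseconds to wait for worker creation before waking the rescuer");

/* and in maybe_create_worker(), instead of the fixed MAYDAY_INITIAL_TIMEOUT: */
        mod_timer(&pool->mayday_timer,
                  jiffies + msecs_to_jiffies(wq_mayday_initial_timeout_ms));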
