Message-Id: <20210817013239.3921-1-jiangshanlai@gmail.com>
Date:   Tue, 17 Aug 2021 09:32:33 +0800
From:   Lai Jiangshan <jiangshanlai@...il.com>
To:     linux-kernel@...r.kernel.org
Cc:     lizefan.x@...edance.com, lizhe.67@...edance.com,
        Tejun Heo <tj@...nel.org>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        Lai Jiangshan <laijs@...ux.alibaba.com>
Subject: [PATCH 0/6] workqueue: Make flush_workqueue() also watch flush_work()

From: Lai Jiangshan <laijs@...ux.alibaba.com>

Li reported a problem[1] where the sanity check in destroy_workqueue()
triggers a WARNING because drain_workqueue() doesn't wait for
flush_work() to finish.

The warning logs are listed below:

WARNING: CPU: 0 PID: 19336 at kernel/workqueue.c:4430 destroy_workqueue+0x11a/0x2f0
*****
destroy_workqueue: test_workqueue9 has the following busy pwq
  pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=0/1 refcnt=2
      in-flight: 5658:wq_barrier_func
Showing busy workqueues and worker pools:
*****

It shows that even after drain_workqueue() returns, the barrier work item
is still in flight and the pwq (and a worker) is still busy on it.

The problem is caused by drain_workqueue() not watching flush_work():

Thread A				Worker
					/* normal work item with a linked barrier */
					process_scheduled_works()
destroy_workqueue()			  process_one_work()
  drain_workqueue()			    /* run normal work item */
				 /--	    pwq_dec_nr_in_flight()
    flush_workqueue()	    <---/
		/* the last normal work item is done */
  sanity_check				  process_one_work()
				       /--  raw_spin_unlock_irq(&pool->lock)
    raw_spin_lock_irq(&pool->lock)  <-/     /* maybe preempt */
    *WARNING*				    wq_barrier_func()
					    /* maybe preempt by cond_resched() */
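
For illustration only, below is a minimal kernel-module sketch of the
racing callers (hypothetical module and names, not the reproducer from
[1]; the race window is narrow, so in practice the scenario likely
needs to be looped to hit the WARNING):

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/kthread.h>
#include <linux/delay.h>

static struct workqueue_struct *test_wq;
static struct work_struct test_work;

static void test_work_fn(struct work_struct *work)
{
	/* stay in flight long enough for flush_work() to queue a barrier */
	msleep(10);
}

static int flusher_fn(void *unused)
{
	/* concurrent flush_work() that links a barrier after test_work */
	flush_work(&test_work);
	return 0;
}

static int __init test_init(void)
{
	test_wq = alloc_workqueue("test_workqueue", 0, 0);
	if (!test_wq)
		return -ENOMEM;

	INIT_WORK(&test_work, test_work_fn);
	queue_work(test_wq, &test_work);
	kthread_run(flusher_fn, NULL, "test_flusher");

	/*
	 * drain_workqueue() waits for test_work itself but not for the
	 * barrier queued by the concurrent flush_work(), so the sanity
	 * check in destroy_workqueue() can observe a busy pwq and WARN.
	 */
	destroy_workqueue(test_wq);
	return 0;
}
module_init(test_init);
MODULE_LICENSE("GPL");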

So the solution is to make drain_workqueue() watch for flush_work();
since drain_workqueue() is a loop over flush_workqueue(), that means
making flush_workqueue() watch for flush_work().
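
For context, the flushing machinery is color based.  The sketch below
is abridged from the queueing path in kernel/workqueue.c as of this
series' base (paraphrased from memory, locking and error handling
elided, not a verbatim excerpt):

	/*
	 * Every normally queued work item is tagged with the pwq's
	 * current work color and counted in nr_in_flight[] for that
	 * color; flush_workqueue() advances the color and waits until
	 * the old color's count drops to zero on every pwq.
	 */
	pwq->nr_in_flight[pwq->work_color]++;
	work_flags = work_color_to_flags(pwq->work_color);

	if (likely(pwq->nr_active < pwq->max_active)) {
		pwq->nr_active++;
		worklist = &pwq->pool->worklist;
	} else {
		/* renamed to WORK_STRUCT_INACTIVE by patch 1 */
		work_flags |= WORK_STRUCT_DELAYED;
		worklist = &pwq->delayed_works;
	}

	insert_work(pwq, work, worklist, work_flags);

	/*
	 * Barriers from flush_work(), by contrast, are queued with
	 * work_color_to_flags(WORK_NO_COLOR): no nr_in_flight[] counter
	 * ever tracks them, so flush_workqueue() never waits for them.
	 */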

For historical convenience, barrier work items queued by flush_work()
have used WORK_NO_COLOR.  This color serves two purposes:
	Not participating in flushing
	Not participating in nr_active accounting

Only the second purpose is obligatory.  So the plan is to mark barrier
work items inactive without relying on WORK_NO_COLOR (patch 4), which
then lets us assign them a real flushing color (patch 5), as sketched
below.
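
Concretely, in insert_wq_barrier() the change amounts to something
along these lines (a paraphrased sketch reconstructed from the patch
descriptions, not the verbatim diff; variable names approximate):

	/* Before: the barrier is invisible to flush_workqueue(). */
	insert_work(pwq, &barr->work, head,
		    work_color_to_flags(WORK_NO_COLOR) | linked);

	/*
	 * After (patches 4+5): the barrier inherits the flush color of
	 * its target work item and is counted in nr_in_flight[], while
	 * WORK_STRUCT_INACTIVE keeps it out of the nr_active/max_active
	 * accounting.
	 */
	work_color = get_work_color(*work_data_bits(target));
	pwq->nr_in_flight[work_color]++;
	insert_work(pwq, &barr->work, head,
		    work_color_to_flags(work_color) |
		    WORK_STRUCT_INACTIVE | linked);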

Patches 1-3 are preparation, and patch 6 is a cleanup.

[1]: https://lore.kernel.org/lkml/20210812083814.32453-1-lizhe.67@bytedance.com/

Lai Jiangshan (6):
  workqueue: Rename "delayed" (delayed by active management) to
    "inactive"
  workqueue: Change argument of pwq_dec_nr_in_flight()
  workqueue: Change the code of calculating work_flags in
    insert_wq_barrier()
  workqueue: Mark barrier work with WORK_STRUCT_INACTIVE
  workqueue: Assign a color to barrier work items
  workqueue: Remove unused WORK_NO_COLOR

 include/linux/workqueue.h   |  13 ++--
 kernel/workqueue.c          | 133 ++++++++++++++++++++++--------------
 kernel/workqueue_internal.h |   3 +-
 3 files changed, 88 insertions(+), 61 deletions(-)

-- 
2.19.1.6.gb485710b
