Message-ID: <Zdvw0HdSXcU3JZ4g@boqun-archlinux>
Date: Sun, 25 Feb 2024 18:00:48 -0800
From: Boqun Feng <boqun.feng@...il.com>
To: Tejun Heo <tj@...nel.org>
Cc: torvalds@...ux-foundation.org, mpatocka@...hat.com,
	linux-kernel@...r.kernel.org, dm-devel@...ts.linux.dev,
	msnitzer@...hat.com, ignat@...udflare.com, damien.lemoal@....com,
	bob.liu@...cle.com, houtao1@...wei.com, peterz@...radead.org,
	mingo@...nel.org, netdev@...r.kernel.org, allen.lkml@...il.com,
	kernel-team@...a.com, Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v3 3/8] workqueue: Implement BH workqueues to eventually
 replace tasklets

On Sun, Feb 04, 2024 at 11:29:46AM -1000, Tejun Heo wrote:
> From 4cb1ef64609f9b0254184b2947824f4b46ccab22 Mon Sep 17 00:00:00 2001
> From: Tejun Heo <tj@...nel.org>
> Date: Sun, 4 Feb 2024 11:28:06 -1000
> 
> The only generic interface to execute asynchronously in the BH context is
> tasklet; however, it's marked deprecated and has some design flaws, such as
> the execution code accessing the tasklet item after the execution is
> complete (which can lead to subtle use-after-free bugs in certain usage
> scenarios) and its less-developed flush and cancel mechanisms.
> 
> This patch implements BH workqueues which share the same semantics and
> features of regular workqueues but execute their work items in the softirq
> context. As there is always only one BH execution context per CPU, none of
> the concurrency management mechanisms applies and a BH workqueue can be
> thought of as a convenience wrapper around softirq.
> 
> Except for the inability to sleep while executing and lack of max_active
> adjustments, BH workqueues and work items should behave the same as regular
> workqueues and work items.
> 
> Currently, the execution is hooked to tasklet[_hi]. However, the goal is to
> convert all tasklet users over to BH workqueues. Once the conversion is
> complete, tasklet can be removed and BH workqueues can directly take over
> the tasklet softirqs.
> 
> system_bh[_highpri]_wq are added. As queue-wide flushing doesn't exist in
> tasklet, all existing tasklet users should be able to use the system BH
> workqueues without creating their own workqueues.
> 
> v3: - Add missing interrupt.h include.
> 
> v2: - Instead of using tasklets, hook directly into its softirq action
>       functions - tasklet[_hi]_action(). This is slightly cheaper and closer
>       to the eventual code structure we want to arrive at. Suggested by Lai.
> 
>     - Lai also pointed out several places which need NULL worker->task
>       handling or can use clarification. Updated.
> 
> Signed-off-by: Tejun Heo <tj@...nel.org>
> Suggested-by: Linus Torvalds <torvalds@...ux-foundation.org>
> Link: http://lkml.kernel.org/r/CAHk-=wjDW53w4-YcSmgKC5RruiRLHmJ1sXeYdp_ZgVoBw=5byA@mail.gmail.com
> Tested-by: Allen Pais <allen.lkml@...il.com>
> Reviewed-by: Lai Jiangshan <jiangshanlai@...il.com>
> ---
>  Documentation/core-api/workqueue.rst |  29 ++-
>  include/linux/workqueue.h            |  11 +
>  kernel/softirq.c                     |   3 +
>  kernel/workqueue.c                   | 291 ++++++++++++++++++++++-----
>  tools/workqueue/wq_dump.py           |  11 +-
>  5 files changed, 285 insertions(+), 60 deletions(-)
> 
> diff --git a/Documentation/core-api/workqueue.rst b/Documentation/core-api/workqueue.rst
> index 33c4539155d9..2d6af6c4665c 100644
> --- a/Documentation/core-api/workqueue.rst
> +++ b/Documentation/core-api/workqueue.rst
> @@ -77,10 +77,12 @@ wants a function to be executed asynchronously it has to set up a work
>  item pointing to that function and queue that work item on a
>  workqueue.
>  
> -Special purpose threads, called worker threads, execute the functions
> -off of the queue, one after the other.  If no work is queued, the
> -worker threads become idle.  These worker threads are managed in so
> -called worker-pools.
> +A work item can be executed in either a thread or the BH (softirq) context.
> +
> +For threaded workqueues, special purpose threads, called [k]workers, execute
> +the functions off of the queue, one after the other. If no work is queued,
> +the worker threads become idle. These worker threads are managed in
> +worker-pools.
>  
>  The cmwq design differentiates between the user-facing workqueues that
>  subsystems and drivers queue work items on and the backend mechanism
> @@ -91,6 +93,12 @@ for high priority ones, for each possible CPU and some extra
>  worker-pools to serve work items queued on unbound workqueues - the
>  number of these backing pools is dynamic.
>  
> +BH workqueues use the same framework. However, as there can only be one
> +concurrent execution context, there's no need to worry about concurrency.
> +Each per-CPU BH worker pool contains only one pseudo worker which represents
> +the BH execution context. A BH workqueue can be considered a convenience
> +interface to softirq.
> +
>  Subsystems and drivers can create and queue work items through special
>  workqueue API functions as they see fit. They can influence some
>  aspects of the way the work items are executed by setting flags on the
> @@ -106,7 +114,7 @@ unless specifically overridden, a work item of a bound workqueue will
>  be queued on the worklist of either normal or highpri worker-pool that
>  is associated to the CPU the issuer is running on.
>  
> -For any worker pool implementation, managing the concurrency level
> +For any thread pool implementation, managing the concurrency level
>  (how many execution contexts are active) is an important issue.  cmwq
>  tries to keep the concurrency at a minimal but sufficient level.
>  Minimal to save resources and sufficient in that the system is used at
> @@ -164,6 +172,17 @@ resources, scheduled and executed.
>  ``flags``
>  ---------
>  
> +``WQ_BH``
> +  BH workqueues can be considered a convenience interface to softirq. BH
> +  workqueues are always per-CPU and all BH work items are executed in the
> +  queueing CPU's softirq context in the queueing order.
> +
> +  All BH workqueues must have 0 ``max_active`` and ``WQ_HIGHPRI`` is the
> +  only allowed additional flag.
> +
> +  BH work items cannot sleep. All other features such as delayed queueing,
> +  flushing and canceling are supported.
> +
>  ``WQ_UNBOUND``
>    Work items queued to an unbound wq are served by the special
>    worker-pools which host workers which are not bound to any
> diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
> index 232baea90a1d..283d7891b4c4 100644
> --- a/include/linux/workqueue.h
> +++ b/include/linux/workqueue.h
> @@ -353,6 +353,7 @@ static inline unsigned int work_static(struct work_struct *work) { return 0; }
>   * Documentation/core-api/workqueue.rst.
>   */
>  enum wq_flags {
> +	WQ_BH			= 1 << 0, /* execute in bottom half (softirq) context */
>  	WQ_UNBOUND		= 1 << 1, /* not bound to any cpu */
>  	WQ_FREEZABLE		= 1 << 2, /* freeze during suspend */
>  	WQ_MEM_RECLAIM		= 1 << 3, /* may be used for memory reclaim */
> @@ -392,6 +393,9 @@ enum wq_flags {
>  	__WQ_ORDERED		= 1 << 17, /* internal: workqueue is ordered */
>  	__WQ_LEGACY		= 1 << 18, /* internal: create*_workqueue() */
>  	__WQ_ORDERED_EXPLICIT	= 1 << 19, /* internal: alloc_ordered_workqueue() */
> +
> +	/* BH wq only allows the following flags */
> +	__WQ_BH_ALLOWS		= WQ_BH | WQ_HIGHPRI,
>  };
>  
>  enum wq_consts {
> @@ -434,6 +438,9 @@ enum wq_consts {
>   * they are same as their non-power-efficient counterparts - e.g.
>   * system_power_efficient_wq is identical to system_wq if
>   * 'wq_power_efficient' is disabled.  See WQ_POWER_EFFICIENT for more info.
> + *
> + * system_bh[_highpri]_wq are convenience interfaces to softirq. BH work items
> + * are executed in the queueing CPU's BH context in the queueing order.
>   */
>  extern struct workqueue_struct *system_wq;
>  extern struct workqueue_struct *system_highpri_wq;
> @@ -442,6 +449,10 @@ extern struct workqueue_struct *system_unbound_wq;
>  extern struct workqueue_struct *system_freezable_wq;
>  extern struct workqueue_struct *system_power_efficient_wq;
>  extern struct workqueue_struct *system_freezable_power_efficient_wq;
> +extern struct workqueue_struct *system_bh_wq;
> +extern struct workqueue_struct *system_bh_highpri_wq;
> +
> +void workqueue_softirq_action(bool highpri);
>  
>  /**
>   * alloc_workqueue - allocate a workqueue
> diff --git a/kernel/softirq.c b/kernel/softirq.c
> index 210cf5f8d92c..547d282548a8 100644
> --- a/kernel/softirq.c
> +++ b/kernel/softirq.c
> @@ -27,6 +27,7 @@
>  #include <linux/tick.h>
>  #include <linux/irq.h>
>  #include <linux/wait_bit.h>
> +#include <linux/workqueue.h>
>  
>  #include <asm/softirq_stack.h>
>  
> @@ -802,11 +803,13 @@ static void tasklet_action_common(struct softirq_action *a,
>  
>  static __latent_entropy void tasklet_action(struct softirq_action *a)
>  {
> +	workqueue_softirq_action(false);
>  	tasklet_action_common(a, this_cpu_ptr(&tasklet_vec), TASKLET_SOFTIRQ);
>  }
>  
>  static __latent_entropy void tasklet_hi_action(struct softirq_action *a)
>  {
> +	workqueue_softirq_action(true);
>  	tasklet_action_common(a, this_cpu_ptr(&tasklet_hi_vec), HI_SOFTIRQ);
>  }
>  
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 767971a29c7a..78b4b992e1a3 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -29,6 +29,7 @@
>  #include <linux/kernel.h>
>  #include <linux/sched.h>
>  #include <linux/init.h>
> +#include <linux/interrupt.h>
>  #include <linux/signal.h>
>  #include <linux/completion.h>
>  #include <linux/workqueue.h>
> @@ -72,8 +73,12 @@ enum worker_pool_flags {
>  	 * Note that DISASSOCIATED should be flipped only while holding
>  	 * wq_pool_attach_mutex to avoid changing binding state while
>  	 * worker_attach_to_pool() is in progress.
> +	 *
> +	 * As there can only be one concurrent BH execution context per CPU, a
> +	 * BH pool is per-CPU and always DISASSOCIATED.
>  	 */
> -	POOL_MANAGER_ACTIVE	= 1 << 0,	/* being managed */
> +	POOL_BH			= 1 << 0,	/* is a BH pool */
> +	POOL_MANAGER_ACTIVE	= 1 << 1,	/* being managed */
>  	POOL_DISASSOCIATED	= 1 << 2,	/* cpu can't serve workers */
>  };
>  
> @@ -115,6 +120,14 @@ enum wq_internal_consts {
>  	WQ_NAME_LEN		= 32,
>  };
>  
> +/*
> + * We don't want to trap softirq for too long. See MAX_SOFTIRQ_TIME and
> + * MAX_SOFTIRQ_RESTART in kernel/softirq.c. These are macros because
> + * msecs_to_jiffies() can't be an initializer.
> + */
> +#define BH_WORKER_JIFFIES	msecs_to_jiffies(2)
> +#define BH_WORKER_RESTARTS	10

Sorry, late to the party, but I wonder how this plays along with CPU
hotplug? Say we've queued a lot of BH work items on a CPU and then we
offline that CPU; wouldn't that end up with some BH work items left on
that CPU never being executed?
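
To make the scenario concrete, a minimal (hypothetical) sketch of what I
have in mind -- my_bh_fn() and kick_bh_on() are made-up names; only
system_bh_wq and queue_work_on() come from this patch and the existing
workqueue API:

/* Hypothetical driver snippet, assuming it was converted from a tasklet. */
#include <linux/workqueue.h>

static void my_bh_fn(struct work_struct *work)
{
	/* Runs in the softirq (BH) context of the CPU it was queued on. */
}

static DECLARE_WORK(my_bh_work, my_bh_fn);

static void kick_bh_on(int cpu)
{
	/* Pin the BH work item to @cpu's softirq context. */
	queue_work_on(cpu, system_bh_wq, &my_bh_work);

	/*
	 * If @cpu goes offline before TASKLET_SOFTIRQ gets to run there,
	 * what executes (or migrates) the still-pending work item?
	 */
}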

[Cc Thomas]

Regards,
Boqun

> +
[..]
