Date: Wed, 7 Feb 2024 07:25:12 -1000
From: Tejun Heo <tj@...nel.org>
To: Waiman Long <longman@...hat.com>
Cc: Lai Jiangshan <jiangshanlai@...il.com>, linux-kernel@...r.kernel.org,
	Juri Lelli <juri.lelli@...hat.com>,
	Cestmir Kalina <ckalina@...hat.com>,
	Alex Gladkov <agladkov@...hat.com>, Phil Auld <pauld@...hat.com>,
	Costa Shulyupin <cshulyup@...hat.com>
Subject: Re: [PATCH wq/for-6.9 v4 2/4] workqueue: Enable unbound cpumask
 update on ordered workqueues

Hello, Waiman.

On Tue, Feb 06, 2024 at 08:19:09PM -0500, Waiman Long wrote:
..
> + * The unplugging is done either in apply_wqattrs_cleanup() [fast path] when
> + * the workqueue was idle or in pwq_release_workfn() [slow path] when the
> + * workqueue was busy.

I'm not sure the distinction between fast and slow paths is all that useful
here. Both are really cold paths.

> +static void unplug_oldest_pwq(struct workqueue_struct *wq,
> +			      struct pool_workqueue *exclude_pwq)
> +{
> +	struct pool_workqueue *pwq;
> +	unsigned long flags;
> +	bool found = false;
> +
> +	for_each_pwq(pwq, wq) {
> +		if (pwq == exclude_pwq)
> +			continue;
> +		if (!pwq->plugged)
> +			return;	/* No unplug needed */
> +		found = true;
> +		break;
> +	}
> +	if (WARN_ON_ONCE(!found))
> +		return;
> +
> +	raw_spin_lock_irqsave(&pwq->pool->lock, flags);
> +	if (!pwq->plugged)
> +		goto out_unlock;
> +	pwq->plugged = false;
> +	if (pwq_activate_first_inactive(pwq, true))
> +		kick_pool(pwq->pool);
> +out_unlock:
> +	raw_spin_unlock_irqrestore(&pwq->pool->lock, flags);
> +}

I don't quite understand why this needs iteration and @exclude_pwq.
Shouldn't something like the following be enough?

static void unplug_oldest_pwq(struct workqueue_struct *wq)
{
	struct pool_workqueue *pwq;

	pwq = list_first_entry_or_null(&wq->pwqs, struct pool_workqueue,
				       pwqs_node);
	if (!pwq)
		return;

	raw_spin_lock_irq(&pwq->pool->lock);
	pwq->plugged = false;
	raw_spin_unlock_irq(&pwq->pool->lock);
}

> @@ -4740,6 +4796,13 @@ static void pwq_release_workfn(struct kthread_work *work)
>  		mutex_lock(&wq->mutex);
>  		list_del_rcu(&pwq->pwqs_node);
>  		is_last = list_empty(&wq->pwqs);
> +
> +		/*
> +		 * For an ordered workqueue with a plugged dfl_pwq, restart it now.
> +		 */
> +		if (!is_last && (wq->flags & __WQ_ORDERED))
> +			unplug_oldest_pwq(wq, NULL);

This makes sense.

> @@ -4906,8 +4969,26 @@ static void apply_wqattrs_cleanup(struct apply_wqattrs_ctx *ctx)
..
> +		/*
> +		 * ctx->dfl_pwq (the previous wq->dfl_pwq) may not be the
> +		 * oldest pwq with the plugged flag still set.
> +		 * unplug_oldest_pwq() will still do the right thing and
> +		 * allow only one unplugged pwq in the workqueue.
> +		 */
> +		if ((ctx->wq->flags & __WQ_ORDERED) &&
> +		     ctx->dfl_pwq && !ctx->dfl_pwq->refcnt)
> +			unplug_oldest_pwq(ctx->wq, ctx->dfl_pwq);
> +		rcu_read_unlock();

But why do we need this? Isn't it enough to call unplug_oldest_pwq() once
during workqueue initialization and then chain unplugging from pwq release
from there on?
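
To sketch what I mean (untested; the init-side placement is just a guess,
and the release-side hunk is essentially your hunk above minus the
@exclude_pwq handling):

static int alloc_and_link_pwqs(struct workqueue_struct *wq)
{
	...
	/* at creation, only the oldest pwq starts out unplugged */
	if (wq->flags & __WQ_ORDERED)
		unplug_oldest_pwq(wq);
	...
}

static void pwq_release_workfn(struct kthread_work *work)
{
	...
	list_del_rcu(&pwq->pwqs_node);
	is_last = list_empty(&wq->pwqs);

	/* chain: the next-oldest pwq takes over */
	if (!is_last && (wq->flags & __WQ_ORDERED))
		unplug_oldest_pwq(wq);
	...
}

That way, apply_wqattrs_cleanup() wouldn't need to know about plugging at
all.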

Thanks.

-- 
tejun
