Message-ID: <20140218222907.GD31892@mtj.dyndns.org>
Date:	Tue, 18 Feb 2014 17:29:07 -0500
From:	Tejun Heo <tj@...nel.org>
To:	Lai Jiangshan <laijs@...fujitsu.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] workqueue: async worker destruction

Hello, Lai.

On Tue, Feb 18, 2014 at 12:24:02AM +0800, Lai Jiangshan wrote:
> @@ -2295,9 +2285,16 @@ woke_up:
>  	if (unlikely(worker->flags & WORKER_DIE)) {
>  		spin_unlock_irq(&pool->lock);
>  		WARN_ON_ONCE(!list_empty(&worker->entry));
> +
> +		/* perform worker self-destruction */
> +		mutex_lock(&pool->manager_mutex);
> +		spin_lock_irq(&pool->lock);
> +		idr_remove(&pool->worker_idr, worker->id);
>  		worker->task->flags &= ~PF_WQ_WORKER;
>  		/* No one can access to @worker now, free it. */
>  		kfree(worker);
> +		spin_unlock_irq(&pool->lock);
> +		mutex_unlock(&pool->manager_mutex);

Hmm... is manager_mutex necessary here for synchronization with the
put_unbound_pool() path?

> @@ -3576,13 +3574,36 @@ static void put_unbound_pool(struct worker_pool *pool)
>  	 */
>  	mutex_lock(&pool->manager_arb);
>  	mutex_lock(&pool->manager_mutex);
> -	spin_lock_irq(&pool->lock);
>  
> +	spin_lock_irq(&pool->lock);
>  	while ((worker = first_worker(pool)))
>  		destroy_worker(worker);
>  	WARN_ON(pool->nr_workers || pool->nr_idle);
> -
>  	spin_unlock_irq(&pool->lock);
> +
> +	/* wait until all workers are dead */
> +	for_each_pool_worker(worker, wi, pool) {
> +		/*
> +		 * Although @worker->task has been told to die, it can't
> +		 * exit while we hold ->manager_mutex, so take a reference
> +		 * to it before dropping ->manager_mutex, then wait for it
> +		 * to finish dying.
> +		 */
> +		get_task_struct(worker->task);
> +
> +		/*
> +		 * First, for_each_pool_worker() iterates by ID (@wi), so
> +		 * it is safe even though both ->manager_mutex and ->lock
> +		 * are dropped inside the loop.
> +		 * Second, no new worker can be added now, so the loop is
> +		 * guaranteed to visit every remaining worker and wait for
> +		 * it to die.
> +		 */
> +		mutex_unlock(&pool->manager_mutex);
> +
> +		kthread_stop(worker->task);
> +		put_task_struct(worker->task);
> +		mutex_lock(&pool->manager_mutex);
> +	}

I can't say I'm a fan of the above.  It's icky.  Why can't we do a
simple set-WORKER_DIE-and-wake-up here?

Thanks.

-- 
tejun