Date:	Tue, 2 Dec 2014 15:43:04 -0500
From:	Tejun Heo <tj@...nel.org>
To:	NeilBrown <neilb@...e.de>
Cc:	Jan Kara <jack@...e.cz>, Lai Jiangshan <laijs@...fujitsu.com>,
	Dongsu Park <dongsu.park@...fitbricks.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH - v3?] workqueue: allow rescuer thread to do more work.

Hello,

On Tue, Nov 18, 2014 at 03:27:54PM +1100, NeilBrown wrote:
> @@ -2253,26 +2253,36 @@ repeat:
>  					struct pool_workqueue, mayday_node);
>  		struct worker_pool *pool = pwq->pool;
>  		struct work_struct *work, *n;
> +		int still_needed;
>  
>  		__set_current_state(TASK_RUNNING);
> -		list_del_init(&pwq->mayday_node);
> -
> -		spin_unlock_irq(&wq_mayday_lock);
> -
> -		worker_attach_to_pool(rescuer, pool);
> -
> -		spin_lock_irq(&pool->lock);
> -		rescuer->pool = pool;
> -
> +		spin_lock(&pool->lock);
>  		/*
>  		 * Slurp in all works issued via this workqueue and
>  		 * process'em.
>  		 */
>  		WARN_ON_ONCE(!list_empty(&rescuer->scheduled));
> +		still_needed = need_to_create_worker(pool);
>  		list_for_each_entry_safe(work, n, &pool->worklist, entry)
>  			if (get_work_pwq(work) == pwq)
>  				move_linked_works(work, scheduled, &n);
>  
> +		if (!list_empty(scheduled))
> +			still_needed = 1;
> +		if (still_needed) {
> +			list_move_tail(&pwq->mayday_node, &wq->maydays);
> +			get_pwq(pwq);
> +		} else
> +			/* We can let go of this one now */
> +			list_del_init(&pwq->mayday_node);

This seems rather convoluted.  Why are we testing this before
executing the work items?  Can't we do it after?  Isn't that -
whether the wq still needs rescuing after the rescuer has gone through
it once - what we want to know anyway?  e.g. something like the
following.

	for_each_pwq_on_mayday_list {
		try to fetch work items from pwq->pool;
		if (none was fetched)
			goto remove_pwq;

		execute the fetched work items;

		if (need_to_create_worker()) {
			move the pwq to the tail;
			continue;
		}

	remove_pwq:
		remove the pwq;
	}
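
Fleshed out in C for concreteness - a rough, untested sketch only,
reusing the helpers already in play above (get_work_pwq(),
move_linked_works(), process_scheduled_works(),
need_to_create_worker()); locking (wq_mayday_lock, pool->lock),
attach/detach and pwq reference counting are all elided:

	while (!list_empty(&wq->maydays)) {
		struct pool_workqueue *pwq = list_first_entry(&wq->maydays,
					struct pool_workqueue, mayday_node);
		struct worker_pool *pool = pwq->pool;
		struct work_struct *work, *n;

		/* try to fetch this pwq's work items from its pool */
		list_for_each_entry_safe(work, n, &pool->worklist, entry)
			if (get_work_pwq(work) == pwq)
				move_linked_works(work, &rescuer->scheduled, &n);

		if (list_empty(&rescuer->scheduled))
			goto remove_pwq;	/* none fetched */

		/* execute the fetched work items */
		process_scheduled_works(rescuer);

		if (need_to_create_worker(pool)) {
			/* pool still starved - revisit this pwq later */
			list_move_tail(&pwq->mayday_node, &wq->maydays);
			continue;
		}

	remove_pwq:
		list_del_init(&pwq->mayday_node);
	}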

> +
> +		spin_unlock(&pool->lock);
> +		spin_unlock_irq(&wq_mayday_lock);
> +
> +		worker_attach_to_pool(rescuer, pool);
> +
> +		spin_lock_irq(&pool->lock);
> +		rescuer->pool = pool;
>  		process_scheduled_works(rescuer);
>  
>  		/*
> @@ -2293,7 +2303,7 @@ repeat:
>  		spin_unlock_irq(&pool->lock);
>  
>  		worker_detach_from_pool(rescuer, pool);
> -
> +		cond_resched();

Also, why this addition?  process_one_work() already has
cond_resched_rcu_qs().
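
For reference, the tail of process_one_work() - paraphrased from
memory here, not quoted verbatim - already ends each work item with a
reschedule point:

	worker->current_func(work);	/* run the work item */
	...
	/*
	 * Yield on !PREEMPT kernels and report a quiescent RCU state
	 * so a self-requeueing work item can't monopolize the CPU or
	 * stall RCU.
	 */
	cond_resched_rcu_qs();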

Thanks.

-- 
tejun
