Message-ID: <20120907192556.GE9426@google.com>
Date:	Fri, 7 Sep 2012 12:25:56 -0700
From:	Tejun Heo <tj@...nel.org>
To:	Lai Jiangshan <laijs@...fujitsu.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH wq/for-3.6-fixes 3/3] workqueue: fix possible idle
 worker depletion during CPU_ONLINE

Hello, Lai.

On Fri, Sep 07, 2012 at 09:53:25AM +0800, Lai Jiangshan wrote:
> > This patch fixes the bug by releasing manager_mutexes before letting
> > the rebound idle workers go.  This ensures that by the time idle
> > workers check whether management is necessary, CPU_ONLINE has already
> > released the positions.
> 
> This can't fix the problem.
> 
> +	gcwq_claim_management(gcwq);
> +	spin_lock_irq(&gcwq->lock);
> 
> 
> If manage_workers() happens between these two lines, the problem occurs.

Indeed.  I was only looking at rebinding completion.  Hmmm... I
suppose any simple solution is out of the window at this point.  I
guess we'll have to defer the fix to 3.7.  I reverted the posted
patches.
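
Just to make sure I'm reading the window you point out correctly, here
is a rough interleaving sketch (paraphrased, not taken from any actual
tree):

	CPU_ONLINE (hotplug path)          idle worker
	-------------------------          -----------
	gcwq_claim_management(gcwq);
	  /* now holds every pool->manager_mutex */
	                                   manage_workers() can't take the
	                                   manager role and bails, so no new
	                                   worker gets created even though
	                                   the pool may need one
	spin_lock_irq(&gcwq->lock);
	...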

> My non_manager_role_manager_mutex_unlock() approach has the same
> idea: release manager_mutex before releasing gcwq->lock.  But with
> the non_manager_role_manager_mutex_unlock() approach, a worker that
> fails to grab manager_mutex will detect why it failed and go to
> sleep.  rebind_workers()/gcwq_unbind_fn() will release manager_mutex
> and then wake some of those workers up before releasing gcwq->lock.
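
If I'm reading that right, the unlock side would look something like
the below (purely illustrative; the wake-up and the exact ordering are
sketched from your description above, not taken from an actual patch):

	/* e.g. in rebind_workers(), still holding gcwq->lock */
	for_each_worker_pool(pool, gcwq) {
		/* drop the manager role before dropping gcwq->lock ... */
		mutex_unlock(&pool->manager_mutex);
		/*
		 * ... and kick an idle worker so that it rechecks whether
		 * management is necessary once gcwq->lock is released.
		 */
		wake_up_worker(pool);
	}
	spin_unlock_irq(&gcwq->lock);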

Can you please try to fit the text to 80 columns?  It would be much
easier to read.

> A "release manage_mutex before release gcwq->lock" approach.(no one
> likes it, I think)
> 
> 
> /* claim manager positions of all pools */
> static void gcwq_claim_management_and_lock(struct global_cwq *gcwq)
> {
> 	struct worker_pool *pool, *pool_fail;
> 
> again:
> 	spin_lock_irq(&gcwq->lock);
> 	for_each_worker_pool(pool, gcwq) {
> 		if (!mutex_trylock(&pool->manager_mutex))
> 			goto fail;
> 	}
> 	return;
> 
> fail:	/* unlikely; manage_workers() is a very unlikely path on my box */
> 
> 	for_each_worker_pool(pool_fail, gcwq) {
> 		if (pool_fail != pool)
> 			mutex_unlock(&pool_fail->manager_mutex);
> 		else
> 			break;
> 	}
> 	spin_unlock_irq(&gcwq->lock);
> 	cpu_relax();
> 	goto again;
> }

Yeah, that's kinda ugly and also has the potential to cause extended
periods of busy looping.  Let's think of something else.

Thanks.

-- 
tejun
