Date:	Wed, 08 Sep 2010 16:10:55 +0200
From:	Tejun Heo <tj@...nel.org>
To:	Dave Chinner <david@...morbit.com>
CC:	linux-kernel@...r.kernel.org, xfs@....sgi.com,
	linux-fsdevel@...r.kernel.org
Subject: Re: [2.6.36-rc3] Workqueues, XFS, dependencies and deadlocks

Hello,

On 09/08/2010 12:05 PM, Dave Chinner wrote:
>> * Do you think @max_active > 1 could be useful for xfs?  If most works
>>   queued on the wq are gonna contend for the same (blocking) set of
>>   resources, it would just mean more threads sleeping on those
>>   resources; otherwise, it would help reduce execution latency a
>>   lot.
> 
> It may indeed help, but I can't really say much more than that right
> now. I need a deeper understanding of the impact of increasing
> max_active (I have a basic understanding now) before I could say for
> certain.

Sure, things should be fine as they currently stand.  No need to hurry
anything.
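
For reference, this is roughly where max_active enters the picture; the
wq name and the value 4 below are made up purely for illustration:

	#include <linux/errno.h>
	#include <linux/init.h>
	#include <linux/workqueue.h>

	static struct workqueue_struct *example_wq;

	static int __init example_wq_init(void)
	{
		/*
		 * alloc_workqueue(name, flags, max_active): with
		 * max_active == 4, up to four work items from this wq
		 * can execute concurrently per CPU; anything beyond
		 * that waits on the shared worklist.
		 */
		example_wq = alloc_workqueue("example-wq", 0, 4);
		return example_wq ? 0 : -ENOMEM;
	}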

>> * xfs_mru_cache is a singlethread workqueue.  Do you specifically need
>>   singlethreadedness (strict ordering of works) or is it just to avoid
>>   creating dedicated per-cpu workers?  If the latter, there's no need
>>   to use a singlethread one anymore.
> 
> Didn't need per-cpu workers, so could probably drop it now.

I see.  I'll soon send out a patch to convert xfs to use
alloc_workqueue() instead and will drop the singlethread restriction
there.
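
Roughly, the conversion I have in mind looks like the following
(untested sketch, identifiers approximate; whether the wq still wants
WQ_RESCUER is a separate question):

	/* before: ordered wq backed by a dedicated kthread */
	xfs_mru_reap_wq = create_singlethread_workqueue("xfs_mru_cache");

	/* after: concurrency-managed wq; strict ordering isn't needed */
	xfs_mru_reap_wq = alloc_workqueue("xfs_mru_cache", 0, 1);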

>>   Maybe some of those workqueues can drop WQ_RESCUER, be merged, or just
>>   use the system workqueue?
> 
> Maybe the mru wq can use the system wq, but I'm really opposed to
> merging XFS wqs with system work queues simply from a debugging POV.
> I've lost count of the number of times I've walked the IO completion
> queues with a debugger or crash dump analyser to try to work out if
> missing IO that wedged the filesystem got stuck on the completion
> queue. If I want to be able to say "the IO was lost by a lower
> layer", then I have to be able to confirm it is not stuck in a
> completion queue. That's much harder if I don't know what the work
> container objects on the queue are....

Hmm... that's gonna be a bit more difficult with cmwq, as all works
are now queued on the shared worklist, but you should still be able to
tell.  Maybe crash can be taught how to determine the associated
workqueue from a pending work item.
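
In case it helps, the association is still recoverable from the work
item itself: while a work is queued, its data field carries the cwq
pointer (that's what get_work_cwq() decodes inside kernel/workqueue.c).
The walk crash would need to replicate looks roughly like this; note
that cpu_workqueue_struct and workqueue_struct are private to
kernel/workqueue.c, so this is a sketch of the pointer chasing, not
code you could compile elsewhere:

	/* given a struct work_struct *work sitting on a worklist */
	unsigned long data = atomic_long_read(&work->data);

	if (data & WORK_STRUCT_CWQ) {
		struct cpu_workqueue_struct *cwq;

		cwq = (void *)(data & WORK_STRUCT_WQ_DATA_MASK);
		printk("work %p belongs to wq \"%s\"\n", work, cwq->wq->name);
	}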

Thanks.

-- 
tejun
