Date:	Wed, 8 Sep 2010 20:12:22 +1000
From:	Dave Chinner <david@...morbit.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	linux-kernel@...r.kernel.org, xfs@....sgi.com,
	linux-fsdevel@...r.kernel.org
Subject: Re: [2.6.36-rc3] Workqueues, XFS, dependencies and deadlocks

On Wed, Sep 08, 2010 at 10:46:13AM +0200, Tejun Heo wrote:
> On 09/08/2010 10:28 AM, Dave Chinner wrote:
> >> They may if necessary to keep the workqueue progressing.
> > 
> > Ok, so the normal case is that they will all be processed local to the
> > CPU they were queued on, like the old workqueue code?
> 
> Bound workqueues always process works locally.  Please consider the
> following scenario.
> 
>  w0, w1, w2 are queued to q0 on the same CPU.  w0 burns CPU for 5ms
>  then sleeps for 10ms then burns CPU for 5ms again then finishes.  w1
>  and w2 sleep for 10ms.
> 
> The following is what happens with the original workqueue (ignoring
> all other tasks and processing overhead).
> 
>  TIME IN MSECS	EVENT
>  0		w0 burns CPU
>  5		w0 sleeps
>  15		w0 wakes and burns CPU
>  20		w0 finishes, w1 starts and sleeps
>  30		w1 finishes, w2 starts and sleeps
>  40		w2 finishes
> 
> With cmwq if @max_active >= 3,
> 
>  TIME IN MSECS	EVENT
>  0		w0 burns CPU
>  5		w0 sleeps, w1 starts and sleeps, w2 starts and sleeps
>  15		w0 wakes and burns CPU, w1 finishes, w2 finishes
>  20		w0 finishes
> 
> IOW, cmwq assigns a new worker when there are more work items to
> process but no work item is currently in progress on the CPU.  Please
> note that this behavior is across *all* workqueues.  It doesn't matter
> which work item belongs to which workqueue.

Ok, so in this case if this was on CPU 1, I'd see kworker[1:0],
kworker[1:1] and kworker[1:2] threads all accumulate CPU time?  I'm
just trying to relate your example to the behaviour I've seen to
check if I understand the example correctly.
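
For my own testing, this is roughly the sort of throwaway module I'd
use to reproduce your w0/w1/w2 scenario and watch the per-CPU kworker
threads. Everything in it (the module itself, the "cmwq_test" queue
name, the work functions) is made up purely for illustration; only the
workqueue/delay API calls are the real ones, and max_active is set to
3 to match your ">= 3" case:

/*
 * Sketch of a test module: queue three work items back-to-back on the
 * local CPU and watch how many kworker threads pick them up.
 */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/delay.h>

static struct workqueue_struct *test_wq;
static struct work_struct w0, w1, w2;

static void w0_fn(struct work_struct *work)
{
	mdelay(5);	/* burn CPU for 5ms */
	msleep(10);	/* sleep 10ms - other workers may run w1/w2 here */
	mdelay(5);	/* burn CPU for another 5ms */
}

static void sleeper_fn(struct work_struct *work)
{
	msleep(10);	/* w1 and w2 just sleep for 10ms */
}

static int __init cmwq_test_init(void)
{
	/* bound workqueue (no WQ_UNBOUND), max_active = 3 */
	test_wq = alloc_workqueue("cmwq_test", 0, 3);
	if (!test_wq)
		return -ENOMEM;

	INIT_WORK(&w0, w0_fn);
	INIT_WORK(&w1, sleeper_fn);
	INIT_WORK(&w2, sleeper_fn);

	/* all three land on the worklist of the CPU we run on */
	queue_work(test_wq, &w0);
	queue_work(test_wq, &w1);
	queue_work(test_wq, &w2);
	return 0;
}

static void __exit cmwq_test_exit(void)
{
	flush_workqueue(test_wq);
	destroy_workqueue(test_wq);
}

module_init(cmwq_test_init);
module_exit(cmwq_test_exit);
MODULE_LICENSE("GPL");

With that loaded on an otherwise idle CPU n, I'd expect to see up to
three kworker[n:*] threads show up for that CPU while w0 is asleep,
which is what I was trying to confirm above.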

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
