Date:	Fri, 10 Sep 2010 16:26:58 +0200
From:	Florian Mickler <florian@...kler.org>
To:	Tejun Heo <tj@...nel.org>
Cc:	lkml <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
	Christoph Lameter <cl@...ux-foundation.org>,
	Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH UPDATED] workqueue: add documentation

On Fri, 10 Sep 2010 12:25:55 +0200
Tejun Heo <tj@...nel.org> wrote:

> +Concurrency Managed Workqueue (cmwq)
> +
> +September, 2010		Tejun Heo <tj@...nel.org>
> +			Florian Mickler <florian@...kler.org>
> +
> +CONTENTS

Thx.


I fumbled a bit with the ordering in the design
description... ok so?

Cheers,
Flo

diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index 5317229..3d22821 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -86,45 +86,44 @@ off of the queue, one after the other.  If no work is queued, the
 worker threads become idle.  These worker threads are managed in so
 called thread-pools.
 
-Subsystems and drivers can create and queue work items on workqueues
-as they see fit.
-
-By default, workqueues are per-cpu.  Work items are queued and
-executed on the same CPU as the issuer.  These workqueues and work
-items are said to be "bound".  A workqueue can be specifically
-configured to be "unbound" in which case work items queued on the
-workqueue are executed by worker threads not bound to any specific
-CPU.
-
 The cmwq design differentiates between the user-facing workqueues that
 subsystems and drivers queue work items on and the backend mechanism
 which manages thread-pool and processes the queued work items.
 
-The backend mechanism is called gcwq.  There is one gcwq for each
+The backend is called gcwq.  There is one gcwq for each
 possible CPU and one gcwq to serve work items queued on unbound
 workqueues.
 
+Subsystems and drivers can create and queue work items through special
+workqueue API functions as they see fit.  They can influence some
+aspects of the way the work items are executed by setting flags on the
+workqueue they are putting the work item on.  These flags include
+things like CPU locality, reentrancy, concurrency limits and more.
+For a detailed overview, refer to the API description of
+alloc_workqueue() below.
+
 When a work item is queued to a workqueue, the target gcwq is
 determined according to the queue parameters and workqueue attributes
-and queued on the shared worklist of the gcwq.  For example, unless
+and appended to the shared worklist of that gcwq.  For example, unless
 specifically overridden, a work item of a bound workqueue will be
-queued on the worklist of the gcwq of the CPU the issuer is running
-on.
+queued on the worklist of exactly the gcwq that is associated with the
+CPU the issuer is running on.
 
 For any worker pool implementation, managing the concurrency level (how
 many execution contexts are active) is an important issue.  cmwq tries
-to keep the concurrency at minimal but sufficient level.
+to keep the concurrency at a minimal but sufficient level.  Minimal to save
+resources and sufficient so that the system is used at its full capacity.
 
 Each gcwq bound to an actual CPU implements concurrency management by
 hooking into the scheduler.  The gcwq is notified whenever an active
 worker wakes up or sleeps and keeps track of the number of the
 currently runnable workers.  Generally, work items are not expected to
-hog CPU cycle and maintaining just enough concurrency to prevent work
-processing from stalling should be optimal.  As long as there is one
-or more runnable workers on the CPU, the gcwq doesn't start execution
-of a new work, but, when the last running worker goes to sleep, it
-immediately schedules a new worker so that the CPU doesn't sit idle
-while there are pending work items.  This allows using minimal number
+hog a CPU and consume many cycles.  That means maintaining just enough
+concurrency to prevent work processing from stalling should be optimal.
+As long as there are one or more runnable workers on the CPU, the gcwq
+doesn't start execution of a new work item, but, when the last running
+worker goes to sleep, it immediately schedules a new worker so that the
+CPU doesn't sit idle while there are pending work items.  This allows
+using a minimal number of workers without losing execution bandwidth.
 
 Keeping idle workers around doesn't cost other than the memory space
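
To make the API paragraph above a bit more concrete, here is a minimal
usage sketch (not part of the patch; the example_* names are made up and
the flags are chosen purely for illustration):

#include <linux/module.h>
#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;
static struct work_struct example_work;

static void example_work_func(struct work_struct *work)
{
	/* runs in process context on a worker thread of some gcwq */
	pr_info("example work ran\n");
}

static int __init example_init(void)
{
	/*
	 * WQ_NON_REENTRANT and max_active = 1 stand in for the
	 * "reentrancy" and "concurrency limit" knobs mentioned in
	 * the text above.
	 */
	example_wq = alloc_workqueue("example", WQ_NON_REENTRANT, 1);
	if (!example_wq)
		return -ENOMEM;

	INIT_WORK(&example_work, example_work_func);

	/* bound workqueue: lands on the gcwq of the issuing CPU ... */
	queue_work(example_wq, &example_work);
	/* ... whereas queue_work_on() would target a specific CPU. */

	return 0;
}

static void __exit example_exit(void)
{
	flush_workqueue(example_wq);
	destroy_workqueue(example_wq);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");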

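And a much simplified sketch of the scheduler hook described in the
concurrency paragraphs (again not the actual kernel/workqueue.c code;
gcwq_worker_sleeping() and first_idle_worker() as written here are
hypothetical, and the real gcwq carries more state than this):

#include <linux/list.h>
#include <linux/sched.h>

struct global_cwq {
	int nr_running;			/* currently runnable workers */
	struct list_head worklist;	/* pending work items */
	/* ... */
};

/* hypothetical helper: pick an idle worker of this gcwq to wake */
static struct task_struct *first_idle_worker(struct global_cwq *gcwq);

/*
 * Conceptually called by the scheduler when a worker blocks.
 * Returns a worker task to wake up, or NULL if the CPU still has
 * a runnable worker.
 */
static struct task_struct *gcwq_worker_sleeping(struct global_cwq *gcwq)
{
	/* last runnable worker going to sleep with work pending? */
	if (--gcwq->nr_running == 0 && !list_empty(&gcwq->worklist))
		return first_idle_worker(gcwq);	/* wake a replacement */

	return NULL;
}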