Message-ID: <c62985530910020728tf2cb581t5d7b7ef99f35395c@mail.gmail.com>
Date:	Fri, 2 Oct 2009 16:28:54 +0200
From:	Frédéric Weisbecker <fweisbec@...il.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	jeff@...zik.org, mingo@...e.hu, linux-kernel@...r.kernel.org,
	akpm@...ux-foundation.org, jens.axboe@...cle.com,
	rusty@...tcorp.com.au, cl@...ux-foundation.org,
	dhowells@...hat.com, arjan@...ux.intel.com
Subject: Re: [PATCH 19/19] workqueue: implement concurrency managed workqueue

2009/10/1 Tejun Heo <tj@...nel.org>:
> Currently each workqueue has its own dedicated worker pool.  This
> causes the following problems.
>
> * Works which depend on each other can cause a deadlock by depending
>  on the same execution resource.  This is bad because this type of
>  dependency is quite difficult to find.
>
> * Works which may sleep and take a long time to finish need separate
>  workqueues so that they don't block other works.  Similarly, works
>  which want to be executed in a timely manner often need to create
>  their own custom workqueues to avoid being blocked by long-running
>  ones.  This leads to a large number of workqueues and thus many
>  workers.
>
> * The static one-per-cpu worker isn't good enough for jobs which
>  require a higher level of concurrency, necessitating other worker
>  pool mechanisms.  slow-work and async are good examples and there
>  are also some custom implementations buried in subsystems.
>
> * Combined, the above factors lead to many workqueues with a large
>  number of dedicated and mostly unused workers.  This also makes work
>  processing less optimal as the dedicated workers end up switching
>  among themselves, costing scheduling overhead and wasting cache
>  footprint for their stacks, and as the system gets busy, these
>  workers end up competing with each other.
>
> To solve the above issues, this patch implements a concurrency-managed
> workqueue.
>
> There is a single global cpu workqueue (gcwq) for each cpu, which
> serves all the workqueues.  The gcwq maintains a single pool of
> workers which is shared by all cwqs on the cpu.
>
> The gcwq keeps the number of concurrently active workers to the
> minimum necessary, but no less.  As long as there is one or more
> running workers on the cpu, no new worker is scheduled, so that works
> can be processed in batches as much as possible; but when the last
> running worker blocks, the gcwq immediately schedules a new worker so
> that the cpu doesn't sit idle while there are works to be processed.

That's really a cool thing.
So once such new workers are created, what's the state/event that triggers their
destruction?

Is it the following, propagated recursively?

    Worker A blocks.
    B is created.
    B has just finished a worklet and A has been woken up.
    Then destroy B.
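
Below is a toy encoding of that hypothesized rule, in the same spirit as
the sketch further up (again, this is a guess at the policy in question,
not what the patch implements): a worker that runs out of works exits if
another worker is running again, and otherwise parks itself on a
condition variable.

/*
 * A guess at the teardown policy being asked about, not what the patch
 * implements: a worker that runs out of works exits if another worker
 * is running again, otherwise it parks on a condition variable.  Same
 * kind of user-space model as the earlier sketch; all names invented.
 */
#include <pthread.h>
#include <stddef.h>

struct work {
        void (*fn)(void);
        struct work *next;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t more_work = PTHREAD_COND_INITIALIZER;
static struct work *work_list;
static int nr_running;          /* workers actually executing works */
static int nr_idle;             /* workers parked on more_work */

/* Called around blocking, as in the earlier sketch; waking an idle
 * worker keeps the cpu busy in the meantime. */
void gcwq_will_block(void)
{
        pthread_mutex_lock(&lock);
        nr_running--;
        if (nr_idle > 0 && work_list)
                pthread_cond_signal(&more_work);
        pthread_mutex_unlock(&lock);
}

void gcwq_woke_up(void)
{
        pthread_mutex_lock(&lock);
        nr_running++;
        pthread_mutex_unlock(&lock);
}

void *worker_thread(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&lock);
        nr_running++;
        for (;;) {
                while (work_list) {
                        struct work *w = work_list;
                        work_list = w->next;
                        pthread_mutex_unlock(&lock);
                        w->fn();
                        pthread_mutex_lock(&lock);
                }
                nr_running--;
                /*
                 * Out of work.  If another worker is running again (say,
                 * the one whose blocking caused this worker to be
                 * created), this worker is surplus: exit instead of
                 * idling.
                 */
                if (nr_running > 0)
                        break;
                nr_idle++;
                pthread_cond_wait(&more_work, &lock);
                nr_idle--;
                nr_running++;
        }
        pthread_mutex_unlock(&lock);
        return NULL;
}

/* A queue_work() companion would signal more_work when nr_idle > 0 and
 * create a new worker only when nr_running and nr_idle are both zero. */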
