Message-ID: <4A8D49F5.9090806@gmail.com>
Date:	Thu, 20 Aug 2009 22:04:53 +0900
From:	Tejun Heo <htejun@...il.com>
To:	Jens Axboe <jens.axboe@...cle.com>
CC:	Frederic Weisbecker <fweisbec@...il.com>,
	linux-kernel@...r.kernel.org, jeff@...zik.org,
	benh@...nel.crashing.org, bzolnier@...il.com,
	alan@...rguk.ukuu.org.uk,
	Andrew Morton <akpm@...ux-foundation.org>,
	Oleg Nesterov <oleg@...hat.com>
Subject: Re: [PATCH 0/6] Lazy workqueues

Jens Axboe wrote:
> On Thu, Aug 20 2009, Frederic Weisbecker wrote:
>> An idea to solve this:
>>
>> We could have one thread per struct work_struct.  Similarly to this
>> patchset, this thread waits for queuing requests, but only for this
>> work struct.  If the target cpu has no thread for this work, then
>> create one, like you do, etc...
>>
>> Then the idea is to have one workqueue per struct work_struct, which
>> handles per cpu task creation, etc... And this workqueue only handles
>> the given work.
>>
>> That may solve the deadlock scenarios that are often reported and
>> lead to dedicated workqueue creation.
>>
>> That also makes the serialization of execution between different
>> worklets disappear.  We keep only the serialization between
>> instances of the same work, which seems a pretty natural thing and
>> is less haphazard than multiple works of different natures being
>> randomly serialized against each other.
>>
>> Note the effect would not only be a reduction in deadlocks but also
>> probably an increase in throughput, because works of different
>> natures would no longer need to wait for each other's completion.
>>
>> Latency should also drop (no more high prio work waiting behind a
>> lower prio work).
>>
>> There is a good chance that we won't need per driver/subsys
>> workqueue creation anymore after that, because everything would be
>> per worklet.  We could use a single schedule_work() for all of them
>> and not bother choosing between a specific workqueue and the central
>> events/%d.
>>
>> Hmm?
> 
> I pretty much agree with you; my initial plan for a thread pool would
> be very similar.  I'll gradually work towards that goal.
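
Before getting into that, to make the proposal concrete: the
user-visible change would be that two unrelated works queued with
schedule_work() no longer serialize on the shared events/%d thread.
A minimal sketch of today's behavior (module and function names are
made up for illustration):

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/delay.h>

/* Two unrelated worklets sharing the central events/%d thread. */
static void slow_fn(struct work_struct *w)
{
	msleep(1000);		/* stands in for a long-running worklet */
}

static void fast_fn(struct work_struct *w)
{
	printk(KERN_INFO "fast worklet done\n");
}

static DECLARE_WORK(slow_work, slow_fn);
static DECLARE_WORK(fast_work, fast_fn);

static int __init demo_init(void)
{
	schedule_work(&slow_work);
	/* Today this can wait a full second behind slow_fn on the same
	 * cpu; under the per-work_struct scheme it would get its own
	 * lazily created thread and run immediately. */
	schedule_work(&fast_work);
	return 0;
}
module_init(demo_init);
MODULE_LICENSE("GPL");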

Several issues come to mind with the above approach...

* There will still be cases where you need a fixed dedicated thread.
  Execution resources for anything which might be used during IO need
  to be preallocated (at least some of them) to guarantee forward
  progress (see the first sketch after this list).

* Depending on how heavily works end up being used (and I think their
  usage will grow with improvements like this), we might end up with
  many idling threads again, and note that thread creation /
  destruction is quite costly compared to what works usually do.

* Having different threads executing different works all the time
  might improve latency, but if works are used frequently enough it's
  likely to lower throughput, because short works which can be handled
  in a batch by a single thread would now need to be handled by
  different threads.  Scheduling overhead can be significant compared
  to what those works actually do, and it will also cost much more in
  cache footprint (see the second sketch below).
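
To illustrate the first point, this is why subsystems on the IO path
create their workqueue up front today instead of relying on lazy
thread creation (driver and function names are hypothetical):

#include <linux/module.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *io_wq;

static void io_complete_fn(struct work_struct *w)
{
	/* finishes an IO which memory reclaim may be waiting on */
}

static DECLARE_WORK(io_complete_work, io_complete_fn);

static int __init io_demo_init(void)
{
	/* Preallocate the worker thread at init time.  If it were
	 * created lazily at queueing time, the thread allocation could
	 * itself block on reclaim and deadlock. */
	io_wq = create_singlethread_workqueue("io_demo");
	if (!io_wq)
		return -ENOMEM;
	return 0;
}
module_init(io_demo_init);

/* called from the driver's completion path */
static void io_done(void)
{
	/* safe under memory pressure: the thread already exists */
	queue_work(io_wq, &io_complete_work);
}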
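
On the third point, the batching which per-work threads would lose
comes from the current worker loop; this roughly paraphrases
run_workqueue() in kernel/workqueue.c, details trimmed:

/* One wakeup of the worker drains every pending work back to back,
 * so N short works cost one context switch, not N. */
static void run_workqueue(struct cpu_workqueue_struct *cwq)
{
	spin_lock_irq(&cwq->lock);
	while (!list_empty(&cwq->worklist)) {
		struct work_struct *work =
			list_entry(cwq->worklist.next,
				   struct work_struct, entry);
		work_func_t f = work->func;

		list_del_init(cwq->worklist.next);
		spin_unlock_irq(&cwq->lock);
		f(work);	/* runs back to back, cache stays warm */
		spin_lock_irq(&cwq->lock);
	}
	spin_unlock_irq(&cwq->lock);
}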

Thanks.

-- 
tejun