Message-ID: <4A8D458B.6020208@gmail.com>
Date:	Thu, 20 Aug 2009 21:46:03 +0900
From:	Tejun Heo <htejun@...il.com>
To:	Alan Cox <alan@...rguk.ukuu.org.uk>
CC:	Jeff Garzik <jeff@...zik.org>, Jens Axboe <jens.axboe@...cle.com>,
	Mark Lord <liml@....ca>, linux-ide@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: [PATCH] libata: use single threaded work queue

Hello, Alan.

Alan Cox wrote:
>> It's not about needing per-cpu binding but if works can be executed on
>> the same cpu they were issued, it's almost always beneficial.  The
>> only reason why we have single threaded workqueue now is to limit the
>> number of threads.
> 
> That would argue very strongly for putting all the logic in one place so
> everything shares queues.

Yes, it does.
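
To make the contrast concrete, a rough sketch of the two patterns (not
taken from the patch; the driver-side names are made up and only the
workqueue calls themselves are the existing API):

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/workqueue.h>

/* Pattern A: what drivers do today to cap their thread count --
 * each one creates its own single threaded workqueue. */
static struct workqueue_struct *my_wq;

static int __init my_driver_init(void)
{
        my_wq = create_singlethread_workqueue("my_drv");
        if (!my_wq)
                return -ENOMEM;
        return 0;
}

/* Pattern B: with the logic in one place, the driver only describes
 * the work and queues it on the shared queue; the core decides how
 * many workers actually run. */
static void my_work_fn(struct work_struct *work)
{
        /* short or long running, the core shouldn't have to care */
}

static DECLARE_WORK(my_work, my_work_fn);

static void kick_work(void)
{
        schedule_work(&my_work);
}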

>>> Only if you make the default assumed max wait time for the work too low -
>>> its a tunable behaviour in fact.
>> If the default workqueue is made to manage concurrency well, most
>> works should be able to just use it, so the queue will contain both
>> long running ones and short running ones which can disturb the current
>> batch like processing of the default workqueue which is assumed to
>> have only short ones.
> 
> Not sure why it matters - the short ones will instead end up being
> processed serially in parallel to the hog.

The problem is how to assign works to workers.  With long running
works, the workqueue will definitely need some reserve in the worker
pool.  When short works are queued back-to-back, without a special
provision they'll end up being served by different workers, increasing
cache footprint and execution overhead.  The special provision could
be something timer based, but modding a timer for each work is a bit
expensive.  I think it needs to be more mechanical rather than
depending on heuristics or timing.
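
To show what I mean by the timer being expensive, roughly (this is
purely hypothetical, none of the structures or helpers below exist;
only setup_timer()/mod_timer() and the list/spinlock calls are real
API):

#include <linux/jiffies.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/timer.h>

#define EXTRA_WORKER_DELAY      msecs_to_jiffies(10)

/* Hypothetical timer based provision: keep feeding the current worker
 * and only wake an extra one if works have been sitting on the queue
 * longer than the threshold. */
struct my_pool {
        struct list_head        works;
        struct timer_list       pressure_timer;
        spinlock_t              lock;
};

static void pressure_timer_fn(unsigned long data)
{
        /* queue stayed backed up: wake or fork another worker here */
}

static void my_pool_init(struct my_pool *pool)
{
        INIT_LIST_HEAD(&pool->works);
        spin_lock_init(&pool->lock);
        setup_timer(&pool->pressure_timer, pressure_timer_fn,
                    (unsigned long)pool);
}

static void my_queue_work(struct my_pool *pool, struct list_head *entry)
{
        spin_lock(&pool->lock);
        list_add_tail(entry, &pool->works);
        /* the expensive part: a mod_timer() on every single queueing */
        mod_timer(&pool->pressure_timer, jiffies + EXTRA_WORKER_DELAY);
        spin_unlock(&pool->lock);
}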

>> kthreads).  It would be great if a single work API is exported and
>> concurrency is managed automatically so that no one else has to worry
>> about concurrency but achieving that requires much more intelligence
>> on the workqueue implementation as the basic concurrency policies
>> which used to be imposed by those segregations need to be handled
>> automatically.  Maybe it's better trade-off to leave those
>> segregations as-are and just add another workqueue type with dynamic
>> thread pool.
> 
> The more intelligence in the workqueue logic, the less in the drivers and
> the more it can be adjusted and adapt itself.

Yeap, sure.

> Consider things like power management which might argue for breaking
> the cpu affinity to avoid waking up a sleeping CPU in preference to
> jumping work between processors

Yeah, that's one thing to consider too, but a work being scheduled on
a particular cpu is usually the result of other activity already going
on on that cpu.  I don't think workqueue needs to be modified for
that.  If the other things move, workqueue will automatically follow.
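
FWIW the interface already makes that distinction: queue_work() doesn't
name a CPU at all, only queue_work_on() does, so a work just follows
whatever issued it.  Sketch below, the wrapper names are made up and
only the two queueing calls are the existing API:

#include <linux/workqueue.h>

/* Normal case: no CPU named, the work runs where it was queued, so if
 * the issuing activity gets moved for power reasons the works move
 * with it. */
static void queue_following_issuer(struct workqueue_struct *wq,
                                   struct work_struct *work)
{
        queue_work(wq, work);
}

/* Explicit placement is the exception, not the rule. */
static void queue_on_explicit_cpu(struct workqueue_struct *wq,
                                  struct work_struct *work, int cpu)
{
        queue_work_on(cpu, wq, work);
}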

Thanks.

-- 
tejun
