Message-ID: <4A8C2012.6090809@gmail.com>
Date: Thu, 20 Aug 2009 00:53:54 +0900
From: Tejun Heo <htejun@...il.com>
To: Alan Cox <alan@...rguk.ukuu.org.uk>
CC: Jeff Garzik <jeff@...zik.org>, Jens Axboe <jens.axboe@...cle.com>,
Mark Lord <liml@....ca>, linux-ide@...r.kernel.org,
linux-kernel@...r.kernel.org,
Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: [PATCH] libata: use single threaded work queue
Hello, Alan.
Alan Cox wrote:
> Somewhat simpler for the general case would be to implement
Yeah, it would be great if there were a much simpler solution. A
latency-based one could be a good compromise.
> create_workqueue(min_threads,max_threads, max_thread_idle_time);
> queue_work_within();
>
> at which point you can queue work to the workqueue but with a per-
> workqueue timer running so you know when you might need to create a new
> thread if the current work hasn't finished. Idle threads would then sleep
> and either expire or pick up new work - so that under load we don't keep
> destructing threads.
Are the worker threads per workqueue? Combined with per-CPU binding, a
dynamic thread pool per workqueue can get quite messy. All three
factors end up getting multiplied - i.e. #cpus * pool_size (which can
be enlarged by the same work hopping around between CPUs) * #workqueues.
> That might need a single thread (for the system) that does nothing but
> create workqueue threads to order. It could manage the "next workqueue
> deadline" timer and thread creation.
Another problem is that if we apply this to the existing default
workqueue, which is used by many different supposed-to-be-short works
in essentially batch mode, we might end up enlarging the cache
footprint by scheduling unnecessarily many threads, which, in tight
situations, might show up as a small but noticeable performance
regression.
> The threads would then pick up their work queue work. There is an
> intrinsic race where if we are just on the limit we might create a
> thread just as there is no work to be done - but it would be rare
> and would just then go away.
Agreed. As long as the pool size is reduced gradually, maybe with some
buffer, I don't think this would be an issue.
> I see no point trying to fix ata when applying sanity to the workqueue
> logic would sort a lot of other cases out nicely.
About the same problem exists for in-kernel media presence polling.
Currently, I'm thinking about creating a worker thread per device
which requires polling. It isn't too bad, but it would still be far
better if I could just schedule a work item and not have to worry
about managing concurrency.
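
A per-device polling thread might look like the userspace sketch below.
Everything here is a made-up illustration (struct polled_dev,
start_polling(), check_media, the interval field), not kernel API; it
just shows the shape of the concurrency being managed by hand.

```c
/* Userspace sketch of one polling thread per device needing media
 * presence polling.  All names are hypothetical illustrations.
 */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <time.h>

struct polled_dev {
	const char *name;
	long interval_ms;			/* poll period */
	bool (*check_media)(struct polled_dev *);
	atomic_bool stop;
	atomic_int polls;			/* how many polls ran */
	pthread_t thread;
};

static void *poll_thread(void *arg)
{
	struct polled_dev *dev = arg;
	struct timespec period = {
		.tv_sec = dev->interval_ms / 1000,
		.tv_nsec = (dev->interval_ms % 1000) * 1000000L,
	};

	while (!atomic_load(&dev->stop)) {
		dev->check_media(dev);	/* would report media change events */
		atomic_fetch_add(&dev->polls, 1);
		nanosleep(&period, NULL);
	}
	return NULL;
}

static void start_polling(struct polled_dev *dev)
{
	pthread_create(&dev->thread, NULL, poll_thread, dev);
}

static void stop_polling(struct polled_dev *dev)
{
	atomic_store(&dev->stop, true);
	pthread_join(dev->thread, NULL);
}
```

With a concurrency-managed workqueue, all of the thread lifetime code
above would collapse into scheduling a delayed work item.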
Thanks.
--
tejun