Message-ID: <20120327224837.GC22371@google.com>
Date: Tue, 27 Mar 2012 15:48:37 -0700
From: Tejun Heo <tj@...nel.org>
To: "Rafael J. Wysocki" <rjw@...k.pl>
Cc: Stephen Boyd <sboyd@...eaurora.org>, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Saravana Kannan <skannan@...eaurora.org>,
Kay Sievers <kay.sievers@...y.org>,
Greg KH <gregkh@...uxfoundation.org>,
Christian Lamparter <chunkeey@...glemail.com>,
"Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>,
alan@...rguk.ukuu.org.uk,
Linux PM mailing list <linux-pm@...r.kernel.org>
Subject: Re: [PATCH 2/2] firmware_class: Move request_firmware_nowait() to
workqueues

On Wed, Mar 28, 2012 at 12:21:27AM +0200, Rafael J. Wysocki wrote:
> On Wednesday, March 28, 2012, Tejun Heo wrote:
> > On Tue, Mar 27, 2012 at 02:28:30PM -0700, Stephen Boyd wrote:
> > > Oddly enough a work_struct was already part of the firmware_work
> > > structure but nobody was using it. Instead of creating a new
> > > kthread for each request_firmware_nowait() call just schedule the
> > > work on the long system workqueue. This should avoid some overhead
> > > in forking new threads when they're not strictly necessary.
> > >
> > > Signed-off-by: Stephen Boyd <sboyd@...eaurora.org>
> > > ---
> > >
> > > Is it better to use alloc_workqueue() and not put these on the system
> > > long workqueue?
> >
> > No, just use schedule_work() unless there are specific requirements
> > which can't be fulfilled that way (e.g. it's on memory allocation
> > path, may consume large amount of cpu cycles, ...)
>
> It may wait quite long.
That shouldn't matter. system_long_wq's name is a bit misleading at
this point. The only reason it's used currently is to avoid a cyclic
dependency involving flush_workqueue(), which calls for a clearer
solution anyway. So, yeap, using system_wq should be fine here.
Thank you.
--
tejun
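
[Editor's note: for readers following the thread, the pattern under
discussion — queueing the firmware load onto the shared system
workqueue via schedule_work() instead of spawning a kthread per
request_firmware_nowait() call — looks roughly like the sketch below.
The fw_work fields and function names here are illustrative only, not
the actual drivers/base/firmware_class.c code from Stephen's patch.]

```c
/* Illustrative sketch only; not the real firmware_class implementation. */
#include <linux/workqueue.h>
#include <linux/slab.h>

struct fw_work {
	struct work_struct work;	/* embedded work item, no kthread needed */
	const char *fw_name;
	void (*cont)(const void *fw, void *context);
	void *context;
};

static void fw_work_func(struct work_struct *w)
{
	struct fw_work *fw = container_of(w, struct fw_work, work);

	/* ... perform the (possibly slow) firmware load here ... */
	fw->cont(NULL /* loaded firmware */, fw->context);
	kfree(fw);
}

/* Caller side, replacing the old per-request kernel thread: */
static int fw_load_nowait(const char *name,
			  void (*cont)(const void *, void *), void *ctx)
{
	struct fw_work *fw = kzalloc(sizeof(*fw), GFP_KERNEL);

	if (!fw)
		return -ENOMEM;
	fw->fw_name = name;
	fw->cont = cont;
	fw->context = ctx;
	INIT_WORK(&fw->work, fw_work_func);
	schedule_work(&fw->work);	/* queues on system_wq, per Tejun's advice */
	return 0;
}
```

Per Tejun's point above, plain schedule_work() (i.e. system_wq) is
sufficient here; neither system_long_wq nor a dedicated
alloc_workqueue() instance is needed unless the work sits on a memory
allocation path or burns large amounts of CPU.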