Message-ID: <adad3p2hony.fsf@cisco.com>
Date: Wed, 15 Dec 2010 10:33:05 -0800
From: Roland Dreier <rdreier@...co.com>
To: Tejun Heo <tj@...nel.org>
Cc: linux-kernel@...r.kernel.org, Roland Dreier <rolandd@...co.com>,
Sean Hefty <sean.hefty@...el.com>,
Hal Rosenstock <hal.rosenstock@...il.com>
Subject: Re: [PATCH 01/30] infiniband: update workqueue usage
Thanks, Tejun. A couple of questions:
> * ib_wq is added, which is used as the common workqueue for infiniband
> instead of the system workqueue. All system workqueue usages
> including flush_scheduled_work() callers are converted to use and
> flush ib_wq. This is to prepare for deprecation of
> flush_scheduled_work().
Why do we want to move to a subsystem-specific workqueue? Can we just
replace flush_scheduled_work() with cancel_delayed_work_sync() as
appropriate and not create yet another workqueue?
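Concretely, what I'm picturing is something like the sketch below; the
struct and work-item names are made up for illustration, not taken from
any of the drivers you touched:

#include <linux/jiffies.h>
#include <linux/workqueue.h>

/* Made-up driver state, purely for illustration. */
struct example_priv {
        struct delayed_work stats_work;
};

static void example_stats_task(struct work_struct *work)
{
        /* periodic housekeeping; re-arm itself if needed */
}

static void example_start(struct example_priv *priv)
{
        INIT_DELAYED_WORK(&priv->stats_work, example_stats_task);
        schedule_delayed_work(&priv->stats_work, HZ);
}

/* Current pattern: flush the whole system workqueue on teardown. */
static void example_stop_old(struct example_priv *priv)
{
        flush_scheduled_work();         /* waits for every system-wq user */
}

/* What I have in mind instead: cancel just our own work item. */
static void example_stop_new(struct example_priv *priv)
{
        cancel_delayed_work_sync(&priv->stats_work);
}
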
> * qib_wq is removed and ib_wq is used instead.
You obviously looked at the comment
- /*
- * We create our own workqueue mainly because we want to be
- * able to flush it when devices are being removed. We can't
- * use schedule_work()/flush_scheduled_work() because both
- * unregister_netdev() and linkwatch_event take the rtnl lock,
- * so flush_scheduled_work() can deadlock during device
- * removal.
- */
- qib_wq = create_workqueue("qib");
and know that with the new workqueue code this issue no longer exists.
But both for my education and for the clarity of this patch's changelog,
perhaps you could expand on why ib_wq is safe here.
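Is the point just that flushing ib_wq only waits for work the IB code
itself queued, so the flush can never end up waiting on linkwatch_event()
and the rtnl lock the way flush_scheduled_work() could? In other words,
roughly this pattern (everything here except ib_wq is a made-up example):

#include <linux/workqueue.h>

/* The shared IB workqueue this patch adds (declared in the IB headers). */
extern struct workqueue_struct *ib_wq;

static void example_cleanup_task(struct work_struct *work)
{
        /* per-device cleanup */
}
static DECLARE_WORK(example_cleanup_work, example_cleanup_task);

static void example_remove_one(void)
{
        queue_work(ib_wq, &example_cleanup_work);
        /*
         * Waits only for items queued on ib_wq, so unlike
         * flush_scheduled_work() it never has to wait for
         * linkwatch_event(), which runs on the system workqueue
         * and takes the rtnl lock.
         */
        flush_workqueue(ib_wq);
}
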
> * create[_singlethread]_workqueue() usages are replaced with the new
> alloc[_ordered]_workqueue(). This removes rescuers from all
> infiniband workqueues.
What are rescuers?
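And to make sure I'm reading the create_workqueue() -> alloc_workqueue()
change correctly, is the difference essentially the one sketched below?
(The flags are my guess at the intent, not taken from your patch.)

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;

static int example_init(void)
{
        /* Old style, which apparently always came with a rescuer: */
        /* example_wq = create_singlethread_workqueue("example"); */

        /*
         * New style: ordered, so at most one work item runs at a time,
         * and no rescuer because WQ_MEM_RECLAIM is not passed.
         */
        example_wq = alloc_ordered_workqueue("example", 0);
        if (!example_wq)
                return -ENOMEM;
        return 0;
}

static void example_cleanup(void)
{
        destroy_workqueue(example_wq);
}
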
Can we replace some of these driver-specific workqueues with the ib_wq?
Are all these things just possibilities for future cleanup?
Thanks,
Roland