Message-ID: <20110615101831.GU8141@htj.dyndns.org>
Date: Wed, 15 Jun 2011 12:18:31 +0200
From: Tejun Heo <tj@...nel.org>
To: James Bottomley <James.Bottomley@...e.de>
Cc: Linux SCSI List <linux-scsi@...r.kernel.org>,
FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
lkml <linux-kernel@...r.kernel.org>,
Steven Whitehouse <swhiteho@...hat.com>
Subject: Re: [PATCH RESEND] scsi: don't use execute_in_process_context()
On Sat, Apr 30, 2011 at 04:56:02PM +0200, Tejun Heo wrote:
> SCSI is the only subsystem which uses execute_in_process_context(),
> and its use is racy against module unload: the reap work is not
> properly flushed and could still be running after the SCSI module is
> unloaded.
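
A sketch of the pattern in question (not the exact code; "starget" is
just the conventional local variable name):

  /*
   * If called outside interrupt context, execute_in_process_context()
   * invokes the function directly; otherwise it queues onto the shared
   * system workqueue.  Nothing flushes that work at module unload, so
   * scsi_target_reap_usercontext() can still be running after the
   * module text is gone.
   */
  execute_in_process_context(scsi_target_reap_usercontext, &starget->ew);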
>
> Although execute_in_process_context() can be more efficient when the
> caller already has a context, in this case, the call paths are quite
> cold and the difference is practically meaningless. With commit
> c8efcc25 (workqueue: allow chained queueing during destruction), the
> race condition can easily be fixed by using a dedicated workqueue and
> destroying it on module unload.
>
> Create and use scsi_wq instead of execute_in_process_context().
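
The lifecycle is roughly the following (a sketch with placeholder
init/exit hook names; the patch itself wires this into the existing
SCSI core init/exit paths):

  static struct workqueue_struct *scsi_wq;

  static int __init scsi_wq_init(void)
  {
          scsi_wq = alloc_workqueue("scsi_wq", 0, 0);
          return scsi_wq ? 0 : -ENOMEM;
  }

  static void __exit scsi_wq_exit(void)
  {
          /*
           * destroy_workqueue() flushes pending work first, and since
           * c8efcc25 a work item may chain further queueing while the
           * workqueue is being destroyed, so nothing is left running
           * once unload proceeds past this point.
           */
          destroy_workqueue(scsi_wq);
  }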
>
> * scsi_device->ew is replaced with release_work. scsi_target->ew is
>   replaced with reap_work.
>
> * Both works are initialized with the respective release/reap
>   functions during device/target init. scsi_target_reap_usercontext()
>   is moved upwards to avoid the need for a forward declaration.
>
> * scsi_alloc_target() now explicitly flushes the reap_work of the
>   found dying target before putting it, instead of depending on
>   flush_scheduled_work() (sketched below).
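
Put together, the bullets amount to something like this (a sketch;
names other than release_work, reap_work, scsi_wq and
scsi_target_reap_usercontext() are guesses):

  /* at device init time */
  INIT_WORK(&sdev->release_work, scsi_device_dev_release_usercontext);

  /* release path: queue onto the dedicated workqueue instead of
   * calling execute_in_process_context() */
  queue_work(scsi_wq, &sdev->release_work);

  /* in scsi_alloc_target(), on finding a dying target: flush its reap
   * work explicitly before dropping the reference, rather than relying
   * on a later flush_scheduled_work() */
  flush_work(&found_target->reap_work);
  put_device(&found_target->dev);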
>
> For more info on the issues, please read the following threads.
>
> http://thread.gmane.org/gmane.linux.scsi/62923
> http://thread.gmane.org/gmane.linux.kernel/1124773
>
> Signed-off-by: Tejun Heo <tj@...nel.org>
> Cc: Steven Whitehouse <swhiteho@...hat.com>
James, ping?
--
tejun