Message-ID: <4D0914B5.20208@kernel.org>
Date: Wed, 15 Dec 2010 20:19:17 +0100
From: Tejun Heo <tj@...nel.org>
To: James Bottomley <James.Bottomley@...e.de>
CC: Linux SCSI List <linux-scsi@...r.kernel.org>,
FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
lkml <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] scsi: don't use execute_in_process_context()
Hello,
On 12/15/2010 08:10 PM, James Bottomley wrote:
>> Yes, it would do, but we were already too far with the existing
>> implementation and I don't agree we need more when replacing it with
>> usual workqueue usage would remove the issue. So, when we actually
>> need them, let's consider that or any other way to do it, please.
>> A core API with only a few users which can be easily replaced isn't
>> really worth keeping around. Wouldn't you agree?
>
> Not really ... since the fix is small and obvious.
IMHO, it's a bit too subtle to be a good API.  The callee is called
under a different (locking) context depending on the callsite, and
I've already been bitten enough times by implicit THIS_MODULE
references.  Both properties increase the possibility of introducing
problems which can be quite difficult to detect and reproduce.
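For reference, the existing helper roughly boils down to the
following (paraphrased from kernel/workqueue.c; exact details may
vary by kernel version):

    int execute_in_process_context(work_func_t fn, struct execute_work *ew)
    {
            /* already in process context: call the function directly,
             * under whatever locks the caller happens to hold */
            if (!in_interrupt()) {
                    fn(&ew->work);
                    return 0;
            }

            /* otherwise defer to the shared kernel workqueue */
            INIT_WORK(&ew->work, fn);
            schedule_work(&ew->work);
            return 1;
    }

So whether the callee runs synchronously or asynchronously, and under
which locks, depends entirely on the context of the callsite.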
> Plus now it can't be moved into SCSI because I need the unremovable
> call chain.
Yes, with the proposed change, it cannot be moved to SCSI.
> Show me how you propose to fix it differently first, since we both agree
> the initial attempt doesn't work, and we can take the discussion from
> there.
Given that the structures containing the work items are dynamically
allocated, I would introduce a scsi_wq, unconditionally schedule the
release works on it, and flush it before unloading.  Please note
that workqueues no longer require dedicated threads, so it's quite
cheap.
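As a rough sketch of what I mean (the scsi_wq name is as above; the
release_work member and the scsi_release_dev() callback are made-up
placeholders for the actual SCSI release paths):

    static struct workqueue_struct *scsi_wq;

    static void scsi_release_dev(struct work_struct *work)
    {
            struct scsi_device *sdev =
                    container_of(work, struct scsi_device, release_work);

            /* do the actual teardown and free the dynamically
             * allocated structure */
            kfree(sdev);
    }

    /* always go through the workqueue instead of
     * execute_in_process_context() */
    void scsi_schedule_release(struct scsi_device *sdev)
    {
            INIT_WORK(&sdev->release_work, scsi_release_dev);
            queue_work(scsi_wq, &sdev->release_work);
    }

    static int __init scsi_wq_init(void)
    {
            /* no dedicated kernel thread is created for this anymore */
            scsi_wq = alloc_workqueue("scsi_wq", 0, 0);
            return scsi_wq ? 0 : -ENOMEM;
    }

    static void scsi_wq_exit(void)
    {
            /* make sure all pending release works have finished
             * before the code can go away */
            flush_workqueue(scsi_wq);
            destroy_workqueue(scsi_wq);
    }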
Thanks.
--
tejun