Message-ID: <1292510372.3024.12.camel@mulgrave.site>
Date:	Thu, 16 Dec 2010 09:39:32 -0500
From:	James Bottomley <James.Bottomley@...e.de>
To:	Tejun Heo <tj@...nel.org>
Cc:	Linux SCSI List <linux-scsi@...r.kernel.org>,
	FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
	lkml <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] scsi: don't use execute_in_process_context()

On Wed, 2010-12-15 at 20:42 +0100, Tejun Heo wrote:
> On 12/15/2010 08:33 PM, James Bottomley wrote:
> > A single flush won't quite work.  The target is a parent of the device,
> > both of which release methods have execute_in_process_context()
> > requirements.  What can happen here is that the last put of the device
> > will release the target (from the function).  If both are moved to
> > workqueues, a single flush could cause the execution of the device work,
> > which then queues up target work (and makes it still pending).  A double
> > flush will solve this (because I think our nesting level doesn't go
> > beyond 2) but it's a bit ugly ...
> 
> Yeap, that's an interesting point actually.  I just sent the patch
> but there is no explicit flush.  It's implied by destroy_workqueue()
> and it has been bothering me a bit that destroy_workqueue() could
> exit with pending works if execution of the current one produces
> more.  I was pondering making destroy_workqueue() actually drain all
> the scheduled works and maybe trigger a warning if it seems to loop
> for too long.
> 
> But, anyways, I don't think that's gonna happen here.  If the last put
> hasn't been executed the module reference wouldn't be zero, so module
> unload can't initiate, right?

Wrong, I'm afraid.  There's a nasty two-level complexity in module
references:  Anything which takes an external reference (like open or
mount) does indeed take the module reference and prevent removal.
Anything that takes an internal reference doesn't ... we wait for all of
them to come back in the final removal of the bus type.  This is to
prevent a module removal deadlock.  The callbacks are internal
references, so we wait for them in module_exit() but don't block
module_exit() from being called ... meaning the double callback scenario
could be outstanding.

James


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
