Date:	Thu, 7 Nov 2013 13:44:17 -0800
From:	Christoph Hellwig <hch@...radead.org>
To:	Douglas Gilbert <dgilbert@...erlog.com>
Cc:	SCSI development list <linux-scsi@...r.kernel.org>,
	James Bottomley <james.bottomley@...senpartnership.com>,
	vaughan <vaughan.cao@...cle.com>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: [PATCH v2] sg: O_EXCL and other lock handling

On Wed, Nov 06, 2013 at 02:30:40PM -0500, Douglas Gilbert wrote:
> >during sg_open and sg_release, which are guaranteed not to migrate
> >to a different process during their run time.
> 
> True. What I stated would be a problem if a mutex tried
> to do something similar to Vaughan's patch that was
> reverted just before lk 3.12.0. That held a read or write
> semaphore between a sg_open() and sg_release().
> 
> But your point begs the question why should any driver
> pay the extra cost and usability restrictions of a kernel
> mutex over a semaphore without good reason:

It allows features like spinning before going to sleep, as well
as helping with RT patches.  But this really isn't the place to
discuss this - the ship has sailed and just about any semaphore
used for basic mutual exclusion has been converted to a mutex.

Complaints about that should be directed to Linus and/or the locking
primitive maintainers.

> So what contexts can mutexes be used in?

They can be used in process context only.  Please make sure to also
run any patches changing locking behaviour with CONFIG_PROVE_LOCKING
enabled, as that will verify all this.
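
To illustrate what that option catches: a minimal userspace sketch of the
process-context-only rule, with pthreads standing in for kernel mutexes and
a hypothetical in_atomic flag standing in for the kernel's preempt/irq
state (this is an analogy, not the lockdep implementation):

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical stand-in for the kernel's "are we in atomic context?" state. */
static _Thread_local int in_atomic;

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;

/* Like mutex_lock() with debug checks on: refuse to block while atomic. */
static void checked_mutex_lock(pthread_mutex_t *m)
{
	/* CONFIG_PROVE_LOCKING / might_sleep() would splat here in the kernel. */
	assert(!in_atomic && "sleeping function called from atomic context");
	pthread_mutex_lock(m);
}
```

Locking from an "interrupt" path (in_atomic set) trips the assertion
immediately, which is exactly the early, deterministic failure you want
from a debug build.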

> Also, which one in the above list tells me that sg_open()
> cannot return to the caller with a mutex held? Given
> the special relationship guaranteed by the OS (POSIX)
> between sg_open() and sg_release() *** for the same file
> handle, if this is not allowed, why?

Because you always release it before returning.  Note that the
sparse static checker can verify that for you, if the functions
get too big, which sg_open might get close to.  Sparse is another
tool that I wish every patch author would run before submission.

Read Documentation/sparse.txt for details.
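
For the lock-balance case specifically, sparse works off context
annotations; the kernel defines them along these lines (see
include/linux/compiler.h), and they compile away outside sparse.  A
minimal sketch, with a hypothetical open-style function showing the
release-before-return pattern sparse can check:

```c
#include <pthread.h>

/* Sparse context annotations: real attributes under sparse (__CHECKER__),
 * no-ops for a normal compiler build. */
#ifdef __CHECKER__
# define __acquires(x)	__attribute__((context(x, 0, 1)))
# define __releases(x)	__attribute__((context(x, 1, 0)))
#else
# define __acquires(x)
# define __releases(x)
#endif

static pthread_mutex_t open_lock = PTHREAD_MUTEX_INITIALIZER;
static int open_count;

/* Hypothetical open path: all setup happens under the lock, and every
 * return path drops it -- sparse flags any path that leaves it held. */
static int sg_open_sketch(void)
{
	pthread_mutex_lock(&open_lock);
	open_count++;			/* ...device setup under the lock... */
	pthread_mutex_unlock(&open_lock);
	return 0;			/* lock context balanced on return */
}
```

In a kernel tree, "make C=1" runs sparse over files being recompiled and
"make C=2" runs it over every file.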

> >It always is called through the class interface remove_dev
> >method, which always is called under a lock.
> 
> This is an important piece of extra information. Does that
> mean sg_add() and sg_remove() can't wait on anything? I note
> that sg_add() calls sg_alloc() which immediately does:
>   sdp = kzalloc(sizeof(Sg_device), GFP_KERNEL);

You can sleep under mutexes just fine, although in general it
is preferable not to sleep for too long with a mutex held.

> Where are the preconditions from the scsi subsystem to a ULD
> stating:
>   - that sg_remove() and sg_add() are called from a non-interrupt,
>     non tasklet and non timer context?

The driver core uses sleeping calls like mutexes in all these code
paths, so a call from these contexts will blow up immediately.

>   - invocations of sg_add() and sg_remove() are paired in a
>     similar fashion as sg_open() and sg_release()

That's how the class_interfaces work, easily verified by code
inspection.  Unfortunately these interfaces aren't really documented,
but a patch sent to Greg, who is now on Cc, would take care of that.

> By the second point I mean, for example, that sg_remove()
> cannot be called twice for the same device without a sg_add()
> between them to add (back) that device.

That is correct.
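
Stated as an invariant: for any one device, add and remove strictly
alternate, starting with add.  A sketch with a hypothetical per-device
flag (the class_interface core is what actually provides this guarantee):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-device presence flag; the driver core guarantees the
 * add/remove calls for one device strictly alternate, starting with add. */
static bool device_present;

static void sg_add_sketch(void)
{
	assert(!device_present);	/* never two adds without a remove */
	device_present = true;
}

static void sg_remove_sketch(void)
{
	assert(device_present);		/* never two removes without an add */
	device_present = false;
}
```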
