Message-ID: <20110317132634.GB12760@beardog.cce.hp.com>
Date:	Thu, 17 Mar 2011 08:26:34 -0500
From:	scameron@...rdog.cce.hp.com
To:	Arnd Bergmann <arnd@...db.de>
Cc:	linux-kernel@...r.kernel.org, axboe@...nel.dk, dab@...com,
	mike.miller@...com, scameron@...rdog.cce.hp.com
Subject: Re: Question about driver ioctl, big kernel lock, mutexes

On Wed, Mar 16, 2011 at 09:40:34PM +0100, Arnd Bergmann wrote:
> On Wednesday 16 March 2011 20:47:26 scameron@...rdog.cce.hp.com wrote:
> > On Wed, Mar 16, 2011 at 08:06:14PM +0100, Arnd Bergmann wrote:
> 
> > > Absolutely. The best way forward would be to prove that the mutex
> > > I introduced is actually pointless, or alternatively change the code
> > > so that the mutex is only held in the places where it is really
> > > required, if any.
> > 
> > Pretty sure the ioctl stuff is already thread safe, but I will look at it
> > again.
> > 
> > There are a couple of other places the mutex is used (open, release, and
> > I think one other place) that I'm less sure about, only because I haven't
> > thought about them yet.  It should be safe to assume that nothing outside
> > the driver depends on these mutexes (obviously, since they're declared
> > only in the driver), which should make it pretty easy to figure out.
> 
> Ah, right. The open/release functions access the two usage_count
> variables in an unsafe way. The mutex serializes open against release,
> but there are other functions that look at the same data without
> taking the mutex. It would be good to make this safer in some
> way.
> 
> I think turning each count into an atomic_t would be enough (see the
> sketch below), unless the two usage counts need to be updated atomically
> with respect to each other or in the context of something else.
> 
> The rest of open/close looks fine to me without the mutex.
> 
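A minimal sketch of that atomic_t conversion, for the record.  The names
here are hypothetical, not the driver's actual symbols, and the
open/release prototypes follow the block layer of that era:

#include <linux/atomic.h>
#include <linux/blkdev.h>

static atomic_t usage_count = ATOMIC_INIT(0);	/* was: int guarded by the mutex */

static int sketch_open(struct block_device *bdev, fmode_t mode)
{
	atomic_inc(&usage_count);	/* no lock needed for the count */
	return 0;
}

static int sketch_release(struct gendisk *disk, fmode_t mode)
{
	atomic_dec(&usage_count);
	return 0;
}

/* Code paths that used to peek at the counter without the mutex
 * can now read it safely: */
static bool sketch_busy(void)
{
	return atomic_read(&usage_count) > 0;
}

With the count handled that way, the mutex only has to cover whatever
else in open/release actually needs serializing, if anything.
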
> > > When I submitted the patches, I always tried to make it clear that
> > > most of the new mutexes are not required at all, but that proving
> > > this for every single driver takes a lot of hard work. 
> > > 
> > > Having a mutex that should not be there causes a performance regression
> > > or a deadlock, both of which are fairly easy to bisect and fix, but
> > > missing a mutex because of a subtle BKL dependency would have meant
> > > introducing a race condition that would be extremely hard to analyse.
> > > 
> > > Limiting the number of concurrent I/Os submitted to a device might
> > > be a good idea, but I think that's a separate discussion.
> > 
> > Ok.  Thanks for the explanation and confirmation of what I was thinking.
> > I'll start working on a fix for this.  It may take me a little while, as
> > there are a bunch of other things I have to work on, but it's been this
> > way since June of 2010 with no complaints (that I know of), so I guess
> > it won't hurt too much for it to stay this way a little while longer.
> 
> I'm curious: what is even the intention behind the passthru ioctls?
> Do you have applications that use them a lot?

Yes, quite a lot.  The main user would be the HP Array Configuration
Utility.  Also SNMP storage agents and SMX providers for storage, and
things like cciss_vol_status and the other utilities here:
http://cciss.sourceforge.net/#cciss_utils

The passthru ioctl is how new logical drives are configured, added, and
removed; RAID levels can be changed, migrated, etc. on the fly, along with
lots of other things.  Essentially, the firmware on the Smart Array boards
has a ton of Smart Array specific functionality that can be controlled
through the passthrough ioctl, as sketched below.
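
For reference, here is a hedged userspace sketch of the kind of thing
those tools do: sending a plain SCSI INQUIRY to the controller through
CCISS_PASSTHRU.  The field names follow linux/cciss_ioctl.h; the device
node path is illustrative, and the real tools substitute Smart Array
specific commands for the INQUIRY used here:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/cciss_ioctl.h>

int main(void)
{
	unsigned char inq[96];			/* standard INQUIRY data */
	IOCTL_Command_struct cmd;
	int fd = open("/dev/cciss/c0d0", O_RDWR);	/* illustrative node */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&cmd, 0, sizeof(cmd));
	/* LUN_info left zeroed: address the controller itself */
	cmd.Request.CDBLen = 6;
	cmd.Request.Type.Type = TYPE_CMD;
	cmd.Request.Type.Attribute = ATTR_SIMPLE;
	cmd.Request.Type.Direction = XFER_READ;
	cmd.Request.Timeout = 0;		/* no timeout */
	cmd.Request.CDB[0] = 0x12;		/* INQUIRY opcode */
	cmd.Request.CDB[4] = sizeof(inq);	/* allocation length */
	cmd.buf_size = sizeof(inq);
	cmd.buf = inq;

	if (ioctl(fd, CCISS_PASSTHRU, &cmd) < 0) {
		perror("CCISS_PASSTHRU");
		return 1;
	}
	printf("vendor/product: %.24s\n", inq + 8);
	return 0;
}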

-- steve
